Re: gin performance issue.

From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Marc Mamin <M(dot)Mamin(at)intershop(dot)de>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: gin performance issue.
Date: 2016-02-08 19:16:00
Message-ID: CAMkU=1yqnHagA-bA6rU0Nmw8DwB8pTtQYcLoVecRi=GU2TR=oQ@mail.gmail.com
Lists: pgsql-performance

On Mon, Feb 8, 2016 at 2:21 AM, Marc Mamin <M(dot)Mamin(at)intershop(dot)de> wrote:
>
> - autovacuum will not run, as these are insert-only tables
> - according to this post, autoanalyze would also do the job:
> http://postgresql.nabble.com/Performance-problem-with-gin-index-td5867870.html
> It seems that this information is missing from the docs
>
> but sadly it doesn't trigger in our case either, as we have manual ANALYZEs called during the data processing that immediately follows the imports.
> Manual vacuum is just too expensive here.
>
> Hence disabling fast update seems to be our only option.

Does disabling fastupdate cause problems? I always start with
fastupdate disabled, and only turn it on if I have a demonstrable
problem with it being off.
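
For what it's worth, a minimal sketch of both ways to do that (the table
and index names here are made up):

    -- at index creation time
    CREATE INDEX docs_terms_gin ON docs USING gin (terms)
        WITH (fastupdate = off);

    -- or on an existing index; note that this alone does not flush
    -- entries already sitting in the pending list, a VACUUM (or
    -- gin_clean_pending_list() on 9.6) is still needed for that
    ALTER INDEX docs_terms_gin SET (fastupdate = off);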

I would think "off" is likely to be better for you. You say each
distinct key only appears in 2.7 rows. So you won't get much benefit
from aggregating together all the new rows for each key before
updating the index for that key, as there is very little to aggregate.

Also, you say the inserts come in bulk. It is generally a good thing
to slow down bulk operations by making them clean up their own messes,
for the sake of everyone else.

> I hope this problem will help push up the 9.5 upgrade on our todo list :)
>
> Ideally, we would then like to flush the pending list unconditionally after the imports.
> I guess we could achieve something approaching that by modifying the analyze scale factor and gin_pending_list_limit
> before/after the (bulk) imports, but having the possibility to flush it via SQL would be better.
> Is this a reasonable feature wish?

That feature has already been committed for the 9.6 branch.
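If I remember correctly, that is the gin_clean_pending_list() function,
so on 9.6 you will be able to flush the list from plain SQL; on 9.5 the
closest knob is the per-index gin_pending_list_limit storage parameter.
A rough sketch, again with a made-up index name:

    -- 9.6+: push the pending-list entries into the main index structure
    SELECT gin_clean_pending_list('docs_terms_gin'::regclass);

    -- 9.5: shrink the per-index pending-list limit (in kB, minimum 64)
    -- so it gets flushed much more often during the bulk load
    ALTER INDEX docs_terms_gin SET (gin_pending_list_limit = 64);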

> And a last question: how does the index update work with bulk (COPY) inserts:
> without a pending list: is it like a per-row trigger, or will the index be taken care of afterwards?

Done for each row.

> with small pending lists: is there a concurrency problem, or can both tasks cleanly work in parallel?

I don't understand the question. What are the two tasks you are
referring to? Do you have multiple COPY running at the same time in
different processes?

Cheers,

Jeff
