From: Jesper Krogh <jesper(at)krogh(dot)cc>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Random penalties on GIN index updates?
Date: 2009-10-22 04:57:49
Message-ID: 4ADFE64D.1080608@krogh.cc
Lists: pgsql-performance
Robert Haas wrote:
> On Wed, Oct 21, 2009 at 2:35 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> Jesper Krogh <jesper(at)krogh(dot)cc> writes:
>>> What I seem to miss is a way to make sure some "background" application is
>>> the one getting the penalty, so a random user doing a single insert
>>> won't get stuck. Is that doable?
>> You could force a vacuum every so often, but I don't think that will
>> help the locking situation. You really need to back off work_mem ---
>> 512MB is probably not a sane global value for that anyway.
>
> Yeah, it's hard to imagine a system where that doesn't threaten all
> kinds of other bad results. I bet setting this to 4MB will make this
> problem largely go away.
>
> Arguably we shouldn't be using work_mem to control this particular
> behavior, but...
I came from Xapian, where you can only have one writer process, but
batching up several GB's worth of documents improved indexing performance
dramatically. Lowering work_mem to 16MB gives "batches" of 11,000 documents
and stalls of between 45 and 90s, i.e. ~33 docs/s.
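
For completeness, a rough sketch of the "background job takes the
penalty" idea along the lines of Tom's vacuum suggestion (the table
name here is just a placeholder for the one carrying the GIN index):

    -- postgresql.conf: keep the global setting small, as Robert
    -- suggests, so a single client INSERT only ever flushes a
    -- small pending list
    --   work_mem = 4MB

    -- Run periodically (e.g. from cron) so the GIN pending-list
    -- cleanup is mostly done by this background job rather than
    -- by whichever client insert happens to cross the work_mem
    -- threshold:
    VACUUM fts_documents;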
--
Jesper