From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Jesper Krogh <jesper(at)krogh(dot)cc>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Random penalties on GIN index updates?
Date: 2009-10-22 03:16:28
Message-ID: 603c8f070910212016t3073b73cw3318787f81e42ad7@mail.gmail.com
Lists: pgsql-performance
On Wed, Oct 21, 2009 at 2:35 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Jesper Krogh <jesper(at)krogh(dot)cc> writes:
>> What I seem to miss is a way to make sure some "background" application
>> is the one getting the penalty, so a random user doing a single insert
>> won't get stuck. Is that doable?
>
> You could force a vacuum every so often, but I don't think that will
> help the locking situation. You really need to back off work_mem ---
> 512MB is probably not a sane global value for that anyway.
Yeah, it's hard to imagine a system where a 512MB global work_mem
doesn't threaten all kinds of other bad results. I bet setting it to
4MB will make this problem largely go away.
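A minimal sketch of what I mean (the role name below is just an
illustration, not something from your setup): keep the global default
small in postgresql.conf and hand out a larger value only to the
sessions or roles that actually need it:

    # postgresql.conf -- global default, applies to every backend
    work_mem = 4MB

    -- per-session override for a big sort or maintenance job
    SET work_mem = '256MB';

    -- or pin a larger value to the role doing the batch work
    ALTER ROLE batch_loader SET work_mem = '256MB';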
Arguably we shouldn't be using work_mem to control this particular
behavior, but...
...Robert