From: Jesper Krogh <jesper(at)krogh(dot)cc>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: issue with gininsert under very high load
Date: 2014-02-14 07:06:40
Message-ID: 52FDC080.80605@krogh.cc
Lists: pgsql-hackers
On 14/02/14 00:49, Tom Lane wrote:
> Andres Freund <andres(at)2ndquadrant(dot)com> writes:
>> On 2014-02-13 16:15:42 -0500, Tom Lane wrote:
>>> Something like the attached? Can somebody who's seen this problem confirm
>>> this improves matters?
>> Hm. Won't that possiby lead to the fast tuple list growing unboundedly?
>> I think we would need to at least need to stop using the fast tuple
>> mechanism during gininsert() if it's already too big and do plain
>> inserts.
> No, because we've already got a process working on cleaning it out.
>
> In any case, this needs some testing to see if it's an improvement
> or not.
I have some real-world experience with the fastupdate mechanism: under
concurrent load it behaves really badly. Random processes waiting on the
cleanup (or competing with it) will see latency spikes because they
magically hit that corner. So reverting to plain inserts when a process
cannot add to the pending list will not remove the problem, but it will
at least confine the cost to the process actually doing the cleanup.

The built-in mechanism, where cleanup is a cost paid by whichever
process happened to fill the pending list, is really hard to deal with
in production. More control would be appreciated, perhaps even an
explicit flush mechanism. I'd like to batch up inserts during one
transaction only and flush on commit.
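For anyone hitting the same latency spikes, the existing per-index knob is the
fastupdate storage parameter, which disables the pending list entirely so every
insert goes straight into the main GIN structure (table and index names below
are placeholders):

```sql
-- Create a GIN index with the pending-list mechanism disabled:
CREATE INDEX my_gin_idx ON my_table USING gin (my_tsvector_col)
    WITH (fastupdate = off);

-- Or turn it off on an existing index:
ALTER INDEX my_gin_idx SET (fastupdate = off);
```

This trades slower individual inserts for predictable latency, since no process
ever gets stuck paying for a bulk pending-list flush.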
--
Jesper - with fastupdate turned off due to above issues.