From: Andres Freund <andres(at)2ndquadrant(dot)com>
To: Jesper Krogh <jesper(at)krogh(dot)cc>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: issue with gininsert under very high load
Date: 2014-02-14 12:40:14
Message-ID: 20140214124014.GL4910@awork2.anarazel.de
Lists: pgsql-hackers
On 2014-02-14 08:06:40 +0100, Jesper Krogh wrote:
> On 14/02/14 00:49, Tom Lane wrote:
> >Andres Freund <andres(at)2ndquadrant(dot)com> writes:
> >>On 2014-02-13 16:15:42 -0500, Tom Lane wrote:
> >>>Something like the attached? Can somebody who's seen this problem confirm
> >>>this improves matters?
> >>Hm. Won't that possibly lead to the fast tuple list growing unboundedly?
> >>I think we would at least need to stop using the fast tuple
> >>mechanism during gininsert() if it's already too big and do plain
> >>inserts.
> >No, because we've already got a process working on cleaning it out.
> >
> >In any case, this needs some testing to see if it's an improvement
> >or not.
>
> Having some real-world experience with the fastupdate mechanism: under
> concurrent load it behaves really badly. Random processes waiting for
> cleanup (or competing with cleanup) will see latency spikes because
> they magically hit that corner. Thus, reverting to plain inserts when a
> process cannot add to the pending list will not remove this problem,
> but will make it hit only the process actually doing the cleanup.
Yea, this is only a part of fixing fastupdate. Limiting the size of the
fastupdate list to something more reasonable is pretty important as
well. Not competing around cleanup will make cleanup much faster though,
so I am not that concerned about the latency spikes it causes once it's
limited in size and done non-concurrently.
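The bypass discussed above (fall back to a plain index insert once the pending list is too big) can be sketched in miniature. This is only an illustration of the decision logic: the constant, variable, and function names below are hypothetical, and in PostgreSQL the pending list actually lives in shared index pages rather than a plain counter.

```c
/* Illustrative sketch only -- these are NOT actual PostgreSQL
 * identifiers; the real pending list is kept in shared buffers. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define PENDING_LIST_LIMIT (4 * 1024 * 1024)  /* hypothetical 4 MB cap */

static size_t pending_list_bytes = 0;         /* stand-in for shared state */

/* Returns true if the entry was queued on the pending list; false means
 * the list is already at its cap, so the caller should do a plain
 * (direct) GIN insert instead of growing the list further. */
static bool
try_fastupdate_insert(size_t entry_size)
{
    if (pending_list_bytes + entry_size > PENDING_LIST_LIMIT)
        return false;           /* list too big: do a plain insert */
    pending_list_bytes += entry_size;
    return true;
}
```

Under this scheme the pending list stays bounded regardless of how slowly the (single, non-competing) cleanup process drains it.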
> The built-in mechanism, where cleanup is a cost paid by the process
> that happened to fill the pending list, is really hard to deal with in
> production. More control would be appreciated, perhaps even an explicit
> flush mechanism. I'd like to batch up inserts during one transaction
> only and flush on commit.
That doesn't seem likely to work with a reasonable amount of effort. The
fastupdate list is shared across all processes, so one backend will
always pay the price for several others.
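A toy model of why the shared list defeats a per-transaction flush: whichever backend happens to cross the cleanup threshold flushes entries queued by everyone, not just its own. All names here are made up for illustration; the real accounting involves shared index pages and work_mem.

```c
/* Toy model, not PostgreSQL code: a single counter stands in for the
 * pending list shared by all backends. */
#include <assert.h>

#define CLEANUP_THRESHOLD 100

static int pending_entries = 0;   /* shared across all "backends" */

/* Queue one entry.  Returns the number of entries this caller had to
 * clean up (0 if it did not trigger cleanup). */
static int
shared_insert(void)
{
    if (++pending_entries >= CLEANUP_THRESHOLD)
    {
        int cleaned = pending_entries;  /* flushes everyone's entries */

        pending_entries = 0;
        return cleaned;
    }
    return 0;
}
```

If backend A queues 99 entries and backend B queues the 100th, B pays the cleanup cost for all 100, which is exactly the "one backend pays for several others" behavior described above.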
Greetings,
Andres Freund
--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services