From: Oleg Bartunov <oleg(at)sai(dot)msu(dot)su>
To: Jesper Krogh <jesper(at)krogh(dot)cc>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: ginfastupdate.. slow
Date: 2011-09-15 19:08:08
Message-ID: Pine.LNX.4.64.1109152306150.26195@sn.sai.msu.ru
Lists: pgsql-hackers
Jesper,
Are you sure you have autovacuum configured properly, so the pending list
doesn't grow too much? It's true that concurrency on the pending list isn't
good, since all entries are appended to it.
Oleg
On Thu, 15 Sep 2011, Jesper Krogh wrote:
> Hi List.
>
> This is just an "observation"; I'll try to reproduce it in a test setup later.
>
> I've been trying to performance-tune a database system which does
> a lot of updates on GIN indexes. I currently have 24 workers running,
> executing quite CPU-intensive stored procedures that help generate
> the body for the GIN index (full-text search).
>
> The data that gets computed on is all memory-resident, and there is a
> 1GB BBWC before data hits the disk system. The iowait
> is 5-10% while running.
>
> The system is nearly twice as fast with fastupdate=off as with fastupdate=on.
> Benchmark done on 9.0.latest.
>
> System: AMD Opteron, 4x12 cores @ 2.2 GHz, 128GB memory.
>
> It is probably not as surprising as it may seem, since "fastupdate" is
> about batching changes up in a queue for later processing, but when
> "later" arrives, concurrency seems to stop.
>
> Is it worth a documentation comment?
>
>
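The "concurrency seems to stop" effect you describe above is the pending list
being flushed into the index proper. For reference, the switch you benchmarked
is a per-index storage parameter (the index name below is a placeholder); as
far as I remember, turning it off doesn't flush entries already queued, so a
VACUUM of the table afterwards helps:

    ALTER INDEX documents_fts_idx SET (fastupdate = off);
    -- entries already in the pending list stay there until the table
    -- is vacuumed
    VACUUM documents;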
Regards,
Oleg
_____________________________________________________________
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru)
Sternberg Astronomical Institute, Moscow University, Russia
Internet: oleg(at)sai(dot)msu(dot)su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83