From: Claudio Freire <klaussfreire(at)gmail(dot)com>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: PostgreSQL-Dev <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Vacuum: allow usage of more than 1GB of work mem
Date: 2016-09-05 20:58:17
Message-ID: CAGTBQpY4n+8kBsmD2NYy9jmr5XWipyet4pCGbHFm4Z97kXVSUQ@mail.gmail.com
Lists: pgsql-hackers
On Mon, Sep 5, 2016 at 5:36 PM, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
> On 5 September 2016 at 15:50, Claudio Freire <klaussfreire(at)gmail(dot)com> wrote:
>> On Sun, Sep 4, 2016 at 3:46 AM, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
>>> On 3 September 2016 at 04:25, Claudio Freire <klaussfreire(at)gmail(dot)com> wrote:
>>>> The patch also makes vacuum free the dead_tuples before starting
>>>> truncation. It didn't seem necessary to hold onto it beyond that
>>>> point, and it might help give the OS more cache, especially if work
>>>> mem is configured very high to avoid multiple index scans.
>>>
>>> How long does that part ever take? Is there any substantial gain from this?
>>>
>>> Lets discuss that as a potential second patch.
>>
>> In the test case I mentioned, it takes longer than the vacuum part itself.
>
> Please provide a test case and timings so we can see what's happening.
The referenced test case is the one I mentioned in the OP:
- createdb pgbench
- pgbench -i -s 4000 pgbench
- psql pgbench -c 'delete from pgbench_accounts;'
- vacuumdb -v -t pgbench_accounts pgbench
with fsync=off, autovacuum=off, maintenance_work_mem=4GB
From what I remember, it used ~2.7GB of RAM up until the truncate
phase, where it freed it. It performed a single index scan over the
PK.
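For reference, a back-of-the-envelope check of that figure (my own arithmetic, not measured from the run): pgbench_accounts has 100,000 rows per scale unit, and lazy vacuum stores one 6-byte ItemPointerData per dead tuple, so deleting everything at scale 4000 needs roughly:

#include <stdio.h>
#include <stdint.h>

/* Rough estimate, not taken from the actual run: dead-TID memory for
 * the scale=4000 test.  pgbench_accounts has 100,000 rows per scale
 * unit; lazy vacuum stores one 6-byte ItemPointerData per dead tuple. */
int
main(void)
{
    uint64_t dead_tuples = 100000ULL * 4000;   /* 400 million */
    uint64_t bytes = dead_tuples * 6;          /* sizeof(ItemPointerData) */

    printf("~%.1f GB of dead-TID storage\n", bytes / 1e9);   /* ~2.4 GB */
    return 0;
}

That lands close enough to the ~2.7GB observed; the difference is presumably allocation granularity and other per-vacuum state.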
I don't have timings (I didn't take them), so I'll have to repeat the
test to get them. It takes all day and makes my laptop unusably slow,
so I'll post them later, but they're not very interesting. The only
interesting bit is that it does a single index scan instead of
several, which is kinda nice on TB-or-more tables.
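To make that concrete, here's a toy calculation (a sketch under my assumptions, not PostgreSQL code): each time the dead-TID array fills, lazy vacuum has to scan every index once, so raising the effective cap from 1GB (MaxAllocSize) to the full maintenance_work_mem is exactly what turns several index passes into one:

#include <stdio.h>
#include <stdint.h>

/* Toy model, not PostgreSQL code: the number of index-vacuum passes is
 * just how many times the dead-TID array fills up. */
static uint64_t
index_passes(uint64_t dead_tuples, uint64_t mem_bytes)
{
    uint64_t capacity = mem_bytes / 6;          /* TIDs that fit in memory */

    return (dead_tuples + capacity - 1) / capacity;   /* ceiling division */
}

int
main(void)
{
    uint64_t dead = 100000ULL * 4000;           /* 400M dead tuples */

    /* HEAD: the allocation is capped at 1GB regardless of maintenance_work_mem. */
    printf("1GB cap: %llu index pass(es)\n",
           (unsigned long long) index_passes(dead, 1ULL << 30));
    /* Patched: the whole 4GB maintenance_work_mem is usable. */
    printf("4GB:     %llu index pass(es)\n",
           (unsigned long long) index_passes(dead, 4ULL << 30));
    return 0;
}

With the 1GB cap that's 3 passes over the PK for this test; with the patch it's 1.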
Btw, without a further patch to prefetch pages on the backward scan
for truncate, my patience ran out before it finished truncating. I
haven't submitted that patch because an essentially identical patch in
an older thread was discussed and more or less rejected, since it
slightly penalized SSDs. While I'm confident my version is a little
easier on SSDs, I don't have an SSD at hand to confirm, so that patch
is still waiting in the queue until I get access to one. The tests
I'll run will include that patch, so the truncate timing won't be
representative of HEAD (I really can't afford to run a scale=4000
pgbench test without it, since it crawls, and smaller scales don't
fill the dead_tuples array).
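For what it's worth, the idea behind that prefetch patch is simple. Here's a standalone illustration, not the patch itself (the plain-file stand-in, block size and window size are all mine, and it's Linux-ish since posix_fadvise isn't available everywhere): issue posix_fadvise(POSIX_FADV_WILLNEED), which is roughly what PrefetchBuffer boils down to where the OS supports it, ahead of the backward block-by-block reads the truncation scan does:

/*
 * Illustration only, not the actual patch: prefetching a backward
 * block-by-block scan over a plain file with
 * posix_fadvise(POSIX_FADV_WILLNEED).  Block size and window size are
 * arbitrary choices for the sketch.
 */
#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

#define BLCKSZ       8192
#define PREFETCH_WIN 32          /* blocks to advise ahead of the scan */

int
main(int argc, char **argv)
{
    if (argc != 2)
    {
        fprintf(stderr, "usage: %s <datafile>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0)
    {
        perror("open");
        return 1;
    }

    struct stat st;
    if (fstat(fd, &st) < 0)
    {
        perror("fstat");
        return 1;
    }

    char  buf[BLCKSZ];
    off_t nblocks = st.st_size / BLCKSZ;

    /* Walk the file backwards, as the truncation scan walks the heap. */
    for (off_t blk = nblocks - 1; blk >= 0; blk--)
    {
        /* At each window boundary, advise the kernel about the next
         * (lower-numbered) window before we get to it. */
        if (blk > 0 && blk % PREFETCH_WIN == 0)
        {
            off_t first = (blk >= PREFETCH_WIN) ? blk - PREFETCH_WIN : 0;

            posix_fadvise(fd, first * BLCKSZ, (blk - first) * BLCKSZ,
                          POSIX_FADV_WILLNEED);
        }

        /* Synchronous read of the current block (stand-in for ReadBuffer). */
        if (pread(fd, buf, BLCKSZ, blk * BLCKSZ) != BLCKSZ)
        {
            perror("pread");
            break;
        }
    }

    close(fd);
    return 0;
}

On rotating media the kernel can then batch and reorder those reads instead of servicing them one 8KB block at a time; on SSDs the extra advice calls are mostly overhead, which is where the objection to the earlier patch came from.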