From: David Rowley <dgrowleyml(at)gmail(dot)com>
To: MichaelDBA <MichaelDBA(at)sqlexec(dot)com>
Cc: Imre Samu <pella(dot)samu(at)gmail(dot)com>, Oleksandr Shulgin <oleksandr(dot)shulgin(at)zalando(dot)de>, Pgsql Performance <pgsql-performance(at)lists(dot)postgresql(dot)org>
Subject: Re: When to use PARTITION BY HASH?
Date: 2020-06-07 21:23:10
Message-ID: CAApHDvq615Snu3bu7EyWP6z2WqhC=RZJW1dv4JSYvZVDQ_SCwA@mail.gmail.com
Lists: pgsql-general pgsql-performance
On Sun, 7 Jun 2020 at 23:41, MichaelDBA <MichaelDBA(at)sqlexec(dot)com> wrote:
> The article referenced below assumes a worst case scenario for bulk-loading with hash partitioned tables. It assumes that the values being inserted are in strict ascending or descending order with no gaps (like a sequence number incrementing by 1), thereby ensuring every partition is hit in order before repeating the process. If the values being inserted are not strictly sequential with no gaps, then the performance is much better. Obviously, what part of the tables and indexes are in memory has a lot to do with it as well.
In PostgreSQL 12, COPY was modified to support bulk inserts into
partitioned tables, which sped up many scenarios. Internally, this
works by maintaining a set of multi-insert buffers, one per partition,
and we generally only flush a buffer to the table when that
partition's buffer fills. However, there is a sanity limit [1] on the
number of multi-insert buffers we maintain at once; currently, that is
32. Technically we could raise that limit, but there would still need
to be one. Unfortunately, in this particular case, since we're most
likely touching somewhere between 199 and 799 other partitions before
hitting the first one again, we really don't get any multi-inserts,
which is likely why performance is worse with hash partitioning.
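
For concreteness, here's a minimal sketch of the kind of setup that
hits this worst case (table and file names are hypothetical, not from
the original thread):

CREATE TABLE hash_test (id bigint NOT NULL) PARTITION BY HASH (id);

-- 64 partitions: double the 32-buffer limit, so with sequential ids
-- each partition's buffer is flushed long before it fills.
DO $$
BEGIN
  FOR i IN 0..63 LOOP
    EXECUTE format(
      'CREATE TABLE hash_test_p%s PARTITION OF hash_test
         FOR VALUES WITH (MODULUS 64, REMAINDER %s)', i, i);
  END LOOP;
END $$;

-- Strictly sequential ids scatter consecutive rows across partitions,
-- so COPY gets little or no benefit from multi-inserts.
COPY hash_test FROM '/path/to/sequential_ids.csv' (FORMAT csv);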
With PG12 and this particular case, you're likely to see COPY
performance drop quite drastically when going from 32 to 33
partitions. The code was designed more for hitting partitions randomly
than for the sort-of round-robin pattern we're likely to get from hash
partitioning on a serial column.
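
If you want to observe that cliff yourself, a rough test (again with
hypothetical names) is to build the table above twice, once with
MODULUS 32 and once with MODULUS 33, and time identical loads in psql:

\timing on
COPY hash_test_m32 FROM '/path/to/sequential_ids.csv' (FORMAT csv);
COPY hash_test_m33 FROM '/path/to/sequential_ids.csv' (FORMAT csv);

At 32 partitions, every partition can hold onto its multi-insert
buffer; at 33, buffers start getting flushed early, so the second load
should be noticeably slower.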
David
[1] https://github.com/postgres/postgres/blob/master/src/backend/commands/copy.c#L2569