From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Ogden Brash <info(at)litika(dot)com>
Cc: Jerry Sievers <gsievers19(at)comcast(dot)net>, pgsql-performance(at)lists(dot)postgresql(dot)org
Subject: Re: Modification of data in base folder and very large tables
Date: 2019-10-11 01:08:54
Message-ID: CAMkU=1xsxdDG9VL+-BN=YtJT6==W2y5iRwQ5Btz6NyavSmmzAw@mail.gmail.com
Lists: pgsql-performance
On Thu, Oct 10, 2019 at 3:40 AM Ogden Brash <info(at)litika(dot)com> wrote:
> If each of the tables has about 3+ billion rows, the index is still going
> to be pretty large and spread over many files. In the source database that
> was backed up, the primary key sequence was sequentially assigned and
> written, but as various postprocessing operations were applied and the rows
> modified, the data is probably in a relatively random, evenly distributed
> order. So I now believe that all the files that are being constantly
> touched are not actually the files for the data rows, but the files for the
> index, and as the system is reading the data it is jumping around
> recreating the index for the primary key based on the random order of the
> data rows it reads.
>
> Sound plausible? I'm still a postgres noob.
>
Yes, perfectly plausible.
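(A quick way to test that, assuming the primary key index is named
big_table_pkey: ask the server which file backs the index and compare it
against the files you see being touched in the base folder.)

    -- Path, relative to the data directory, of the file backing the index;
    -- relations over 1GB are split into segments named <path>, <path>.1, ...
    SELECT pg_relation_filepath('big_table_pkey');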
>
> As an experiment, I am in the process of clustering the source database
> tables by the primary key constraint. I am hoping that if I redo the
> pg_dump after that, it will contain the records in more-or-less primary key
> order and on the subsequent pg_restore it should not have to spend the vast
> majority of the time on reading and seeking.
>
> It is surprising to me that the cluster operations (which also have to
> churn through the entire index and all the records) are going *much* faster
> than pg_restore.
>
The cluster gets to lock the table against any concurrent changes, and then
rebuild the indexes from scratch in bulk. You could have done more-or-less
the same thing just by dropping the primary key while doing the load. Alas,
there is no way to do that without losing the work already done. When you
do a data-only pg_restore, you are dis-inviting it from doing such
optimizations. Isn't the whole point of a data-only restore that you leave
the table open for other business while it happens? (Or maybe there is some
other point to it that I am missing. If you want some halfway measure
between creating the table from scratch and leaving it completely open for
business as usual, then you have to evaluate each of those steps and
implement them yourself; there is no way that pg_restore can reasonably
guess which constraints and indexes it is allowed to drop and which it is
not.)
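A minimal sketch of that manual approach, with illustrative names
(big_table, big_table_pkey, an id column) standing in for the real schema:

    -- Before the data-only restore: drop the primary key constraint
    -- (this also drops its underlying index).
    ALTER TABLE big_table DROP CONSTRAINT big_table_pkey;

    -- ... run pg_restore --data-only here; with no index to maintain,
    -- the load is sequential writes rather than random index insertions ...

    -- Afterwards: rebuild the primary key in one bulk sort, which is the
    -- same kind of bulk rebuild CLUSTER gets to do.
    ALTER TABLE big_table ADD CONSTRAINT big_table_pkey PRIMARY KEY (id);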
Perhaps https://www.postgresql.org/docs/current/populate.html#POPULATE-RM-INDEXES
should mention the indexes associated with primary keys, since dropping them
does require a different syntax and they might be overlooked.
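For example, with the same illustrative names as above, a plain DROP INDEX
is rejected for an index that backs a primary key:

    -- Fails with an error along the lines of: "cannot drop index
    -- big_table_pkey because constraint big_table_pkey on table
    -- big_table requires it"
    DROP INDEX big_table_pkey;

    -- The constraint itself has to be dropped, which takes the index with it:
    ALTER TABLE big_table DROP CONSTRAINT big_table_pkey;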
Cheers,
Jeff