From: Joe Conway <mail(at)joeconway(dot)com>
To: "James Pang (chaolpan)" <chaolpan(at)cisco(dot)com>, Jim Mlodgenski <jimmy76(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "pgsql-performance(at)lists(dot)postgresql(dot)org" <pgsql-performance(at)lists(dot)postgresql(dot)org>
Subject: Re: alter table xxx set unlogged take long time
Date: 2022-07-27 14:35:02
Message-ID: e1920272-57a6-7c5a-924d-be26065ad9ad@joeconway.com
Lists: pgsql-performance
On 7/26/22 08:59, James Pang (chaolpan) wrote:
> We use JDBC to export data into csv ,then copy that to Postgres.
> Multiple sessions working on multiple tables. If not set unlogged , how
> to make COPY run fast ? possible to start a transaction include all of
> these “truncate table xxx; copy table xxxx; create index on tables….”
> With wal_level=minimal, is it ok to make copy and create index without
> logging ?
Not sure if it would work for you, but perhaps a usable strategy would
be to partition the existing large table on something (e.g. a new column
like batch number?).
Then (completely untested) I *think* you could create the "partition"
initially as a free-standing unlogged table, load it, index it, switch
it to logged, and then attach it to the partitioned table.
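A minimal sketch of that sequence, assuming a parent table partitioned by
list on a batch-number column (all names here -- measurements, batch,
batch_42, the CSV path -- are illustrative, not from the original thread):

```sql
-- Completely untested sketch; adapt names, partition key, and indexes.
BEGIN;

-- 1. Create the future partition as a free-standing UNLOGGED table
--    with the same column definitions as the partitioned parent.
CREATE UNLOGGED TABLE batch_42 (LIKE measurements INCLUDING DEFAULTS);

-- 2. Bulk load; unlogged tables skip WAL for the loaded rows.
COPY batch_42 FROM '/data/batch_42.csv' WITH (FORMAT csv);

-- 3. Index while the table is still unlogged.
CREATE INDEX ON batch_42 (id);

-- 4. Switch to logged; this rewrites the table into WAL once,
--    instead of logging every row during the load.
ALTER TABLE batch_42 SET LOGGED;

-- 5. Attach to the parent (parent assumed PARTITION BY LIST (batch)).
ALTER TABLE measurements ATTACH PARTITION batch_42 FOR VALUES IN (42);

COMMIT;
```

Note that an unlogged table is truncated after a crash, so the window of
risk lasts only until the SET LOGGED step commits.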
Perhaps you could also have a background job that periodically
aggregates the batch partitions into larger buckets to minimize the
overall number of partitions.
--
Joe Conway
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com