Re: Savepoints in transactions for speed?

From: Bob Lunney <bob_lunney(at)yahoo(dot)com>
To: Mike Blackwell <mike(dot)blackwell(at)rrd(dot)com>
Cc: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Savepoints in transactions for speed?
Date: 2012-11-28 00:16:23
Message-ID: 2000E26E-35FC-4801-90F3-3CFBDA7B41DF@yahoo.com
Lists: pgsql-performance

Mike,

Is there anything the 1.5 million rows have in common that would allow you to use partitions? If so, you could load the new data into a partition at your leisure, then start a transaction, alter the partition holding the old data to no longer inherit from the parent, alter the new partition to inherit from the parent, and commit. Then drop the old table. This operation would be very fast; the users probably won't even notice.
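
For illustration, a rough sketch of that swap (table names here are hypothetical: assume a parent table "events", an old child "events_old" holding the current rows, and the new data preloaded into "events_new"):

BEGIN;
-- detach the old child: its rows disappear from queries on the parent
ALTER TABLE events_old NO INHERIT events;
-- attach the freshly loaded table in its place
ALTER TABLE events_new INHERIT events;
COMMIT;

-- afterwards, at your leisure
DROP TABLE events_old;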

Bob Lunney

On Nov 27, 2012, at 4:04 PM, Mike Blackwell <mike(dot)blackwell(at)rrd(dot)com> wrote:

> I need to delete about 1.5 million records from a table and reload it in one transaction. The usual advice when loading with inserts seems to be to group them into transactions of around 1k records. Committing at that point would leave the table in an inconsistent state. Would issuing a savepoint every 1k or so records negate whatever downside there is to keeping a transaction open for all 1.5 million records, or just add more overhead?
>
> The data to reload the table is coming from a Perl DBI connection to a different database (not PostgreSQL) so I'm not sure the COPY alternative applies here.
>
> Any suggestions are welcome.
>
>
> Mike
