Savepoints in transactions for speed?

From: Mike Blackwell <mike(dot)blackwell(at)rrd(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Savepoints in transactions for speed?
Date: 2012-11-27 22:04:42
Message-ID: CANPAkgs_LvZV-fabTNN-ahiVcwSBAyOnArw1v+QbCePg7on6ZA@mail.gmail.com
Lists: pgsql-performance

I need to delete about 1.5 million records from a table and reload it in
one transaction. The usual advice when loading with inserts seems to be to
group them into transactions of around 1k records, but committing at that
point would leave the table in an inconsistent state. Would issuing a
savepoint every 1k or so records negate whatever downside there is to
keeping a transaction open for all 1.5 million records, or would it just
add more overhead?
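Something along these lines is what I have in mind (a minimal sketch, not real driver code: `savepoint_batches`, `target_table`, and the statement strings are placeholders, and `execute` stands in for whatever the DBI handle's execute method would be):

```python
def savepoint_batches(rows, batch_size, execute):
    """Reload a table inside one transaction, issuing a SAVEPOINT
    every batch_size rows. `execute` is any callable that takes a
    SQL string; the statements here are illustrative only."""
    execute("BEGIN")
    execute("DELETE FROM target_table")  # target_table is a placeholder
    for i, row in enumerate(rows):
        if i % batch_size == 0:
            # One savepoint per batch; older ones could also be
            # released here to limit savepoint buildup.
            execute("SAVEPOINT sp_%d" % (i // batch_size))
        # Sketch only -- real code would use bound parameters.
        execute("INSERT INTO target_table VALUES (%s)" % row)
    execute("COMMIT")
```

The table only becomes visible to other sessions in its final state at the COMMIT, which is the consistency property I'm after; the open question is what the per-batch savepoints cost.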

The data to reload the table is coming from a Perl DBI connection to a
different database (not PostgreSQL), so I'm not sure the COPY alternative
applies here.
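If a driver-level COPY path did apply, I imagine the rows could be reformatted into COPY's text representation as they are fetched. A rough sketch of that reformatting step (assuming tab-delimited COPY text with \N for NULL, and ignoring escaping of embedded tabs, newlines, and backslashes in values):

```python
import io

def rows_to_copy_text(rows):
    """Format fetched row tuples as PostgreSQL COPY text: one line per
    row, tab-delimited columns, \\N for NULL. Returns a file-like
    object that a COPY-capable driver could stream from."""
    buf = io.StringIO()
    for row in rows:
        buf.write("\t".join(r"\N" if v is None else str(v) for v in row))
        buf.write("\n")
    buf.seek(0)
    return buf
```

The resulting buffer is what would be handed to the driver's COPY entry point, batch by batch, instead of issuing individual INSERTs.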

Any suggestions are welcome.

Mike
