From: Claudio Freire <klaussfreire(at)gmail(dot)com>
To: Jeff Davis <pgsql(at)j-davis(dot)com>
Cc: Mike Blackwell <mike(dot)blackwell(at)rrd(dot)com>, postgres performance list <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Savepoints in transactions for speed?
Date: 2012-11-29 03:48:32
Message-ID: CAGTBQpa5KzE1C4TaRDvaEwiLoO800brWOzJH_t8diEH7trq6Ww@mail.gmail.com
Lists: pgsql-performance
On Wed, Nov 28, 2012 at 8:28 PM, Jeff Davis <pgsql(at)j-davis(dot)com> wrote:
>
> The main problem with a long-running delete or update transaction is
> that the dead tuples (deleted tuples or the old version of an updated
> tuple) can't be removed until the transaction finishes. That can cause
> temporary "bloat", but 1.5M records shouldn't be noticeable.
Not really that fast if you have indices (and who doesn't have a PK
or two?): every new row version needs an entry in each index unless
the update is HOT. I've never been able to update 2M rows in one
transaction in reasonable time (read: less than several hours)
without dropping the indices first. Doing it in batches is way
faster if you can't drop the indices, and if you can leverage HOT
updates.
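
For illustration only (the table "accounts" and its columns are made
up), a batched version might look something like this, with each
batch run as its own transaction:

    -- Update in batches so dead tuples from earlier batches can be
    -- reclaimed while later batches run, instead of piling up for
    -- the duration of one huge transaction.
    UPDATE accounts
    SET    balance  = balance * 1.05,
           adjusted = true          -- progress flag; not indexed,
                                    -- so the update can stay HOT
    WHERE  id IN (
        SELECT id
        FROM   accounts
        WHERE  NOT adjusted
        LIMIT  10000                -- batch size; tune to taste
    );
    -- COMMIT, then repeat until the command reports UPDATE 0.

HOT only applies when no indexed column is modified and the page has
room for the new tuple version, so a fillfactor below 100 on the
table helps too.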