From: Claudio Freire <klaussfreire(at)gmail(dot)com>
To: Mike Blackwell <mike(dot)blackwell(at)rrd(dot)com>
Cc: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Savepoints in transactions for speed?
Date: 2012-11-28 01:16:28
Message-ID: CAGTBQpZ-NLHeZPnD9m2O-UZfMba14t1aU2K73c0tOvz_w232LQ@mail.gmail.com
Lists: pgsql-performance
On Tue, Nov 27, 2012 at 10:08 PM, Mike Blackwell <mike(dot)blackwell(at)rrd(dot)com> wrote:
>
> > Postgresql isn't going to run out of resources doing a big transaction, in the way some other databases will.
>
> I thought I had read something at one point about keeping the transaction size on the order of a couple thousand because there were issues when it got larger. As that apparently is not an issue, I went ahead and tried the DELETE and COPY in a transaction. The load time is quite reasonable this way.
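For reference, that pattern is just the following (a sketch; the table name and file path are made up, and with psql you'd use \copy instead if the file lives on the client):

    BEGIN;
    DELETE FROM staging_items;                    -- clear out the old rows
    COPY staging_items FROM '/tmp/items.csv' CSV; -- bulk-load the replacement data
    COMMIT;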
Updates are faster if batched, if your business logic allows it,
because batching creates less bloat and more opportunities for HOT
updates. I don't think the same applies to inserts, though, and I
haven't heard that it does.
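To illustrate the difference (hypothetical table and columns, not from this thread), compare row-at-a-time statements with one set-based statement:

    -- One transaction per row: a parse, plan and commit for every statement
    UPDATE orders SET status = 'shipped' WHERE id = 1;
    UPDATE orders SET status = 'shipped' WHERE id = 2;
    -- ... and so on for each row

    -- Batched: one statement, one commit for the whole set
    UPDATE orders SET status = 'shipped' WHERE id BETWEEN 1 AND 1000;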
In any case, if your business logic doesn't allow batching (and your
case seems to suggest it doesn't), there's no point in worrying about it.