From: | "Simon Riggs" <simon(at)2ndquadrant(dot)com> |
---|---|
To: | "Csaba Nagy" <nagy(at)ecircle-ag(dot)com> |
Cc: | "Postgres general mailing list" <pgsql-general(at)postgresql(dot)org> |
Subject: | Re: Delete/update with limit |
Date: | 2007-07-23 21:09:52 |
Message-ID: | 1185224992.4284.374.camel@ebony.site |
Lists: pgsql-general
On Mon, 2007-07-23 at 17:56 +0200, Csaba Nagy wrote:
> I don't hold out much hope of convincing anybody that a LIMIT on the
> DELETE/UPDATE commands has valid usage scenarios, but can anybody help
> me find a good way to process such a buffer table chunk-wise? Insert
> speed is the highest priority (hence no indexes and a minimum of
> fields), batch processing should still work fine at large table sizes
> without impacting the inserts at all, and each batch should finish
> quickly to avoid long-running transactions. I can't really think of
> one... other than our scheme of DELETE with LIMIT + trigger + private
> temp table.
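
For reference, the usual emulation of a LIMITed DELETE is the ctid
trick; a rough sketch, with a hypothetical table name:

  -- Delete an arbitrary batch of up to 1000 rows by physical row id.
  DELETE FROM event_buffer
  WHERE ctid = ANY (ARRAY(SELECT ctid FROM event_buffer LIMIT 1000));

The ARRAY(...) form matters: it gives the planner a chance to use a
TID scan, whereas a plain IN (SELECT ctid ...) tends to fall back to a
sequential-scan join, since ctid has no hash or merge join support.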
Use partitioning: don't delete, just drop the partition after a while.
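
A minimal sketch of what that can look like with inheritance-based
partitioning; the table, columns and date ranges here are hypothetical:

  -- Parent table; the application inserts into the current child
  -- (directly, or via a rule/trigger on the parent).
  CREATE TABLE event_buffer (
      created timestamptz NOT NULL DEFAULT now(),
      payload text
  );

  -- One child per day; the CHECK constraint lets constraint_exclusion
  -- skip irrelevant children when the parent is scanned.
  CREATE TABLE event_buffer_2007_07_23 (
      CHECK (created >= DATE '2007-07-23'
         AND created <  DATE '2007-07-24')
  ) INHERITS (event_buffer);

  -- Batch processing reads a closed child directly; once it is done,
  -- discard the whole partition instead of DELETEing rows:
  DROP TABLE event_buffer_2007_07_23;

Unlike a big DELETE, the DROP leaves no dead tuples and no vacuum work
behind, and needs only a brief exclusive lock while the old child is
removed, so the insert path into the current child stays fast.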
--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com