| From: | Sam Mason <sam(at)samason(dot)me(dot)uk> |
|---|---|
| To: | pgsql-general(at)postgresql(dot)org |
| Subject: | Re: Chunk Delete |
| Date: | 2007-11-15 14:02:25 |
| Message-ID: | 20071115140225.GB1955@frubble.xen.chris-lamb.co.uk |
| Lists: | pgsql-general |
On Thu, Nov 15, 2007 at 02:46:09PM +0100, Alexander Staubo wrote:
> On 11/15/07, Abraham, Danny <danny_abraham(at)bmc(dot)com> wrote:
> > With Oracle we do it with: delete <tname> where <cond> and rownum < Y;
>
> You could create a temporary sequence:
>
> create temporary sequence foo_seq;
> delete from foos where nextval('foo_seq') < 50000;
>
> I'm not sure how fast nextval() is, even on temporary sequences; but
> it should be reasonably fast.
That's not going to do anything very useful after VACUUM has been run,
will it? VACUUM will leave lots of empty slots within a page that
subsequent INSERTs will populate. I suppose you could cluster the
table first on some data column and then do your delete trick, but if
the OP really has got a billion rows they're not going to want to
cluster it very regularly.
This basically goes back to the fundamental issue that a relation has
no implicit order (it sounds as though Oracle exposes one via rownum,
but that assumption doesn't hold with PG, or in general).
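A chunked delete that doesn't depend on any physical order can be
keyed on the primary key instead; roughly something like this,
assuming the table is called foos, has a primary key id, and <cond>
picks out the rows to get rid of:

  delete from foos
  where id in (
    select id from foos
    where <cond>        -- whatever identifies the doomed rows
    limit 50000
  );

Repeat until it deletes nothing; each statement touches at most
50000 rows.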
Maybe partitioning could help here.
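If the rows fall into natural groups (by date, say) then
inheritance-based partitioning would let you drop a whole chunk
instead of deleting row by row; a very rough sketch, with made-up
table and column names:

  -- one child table per month of data
  create table foos_2007_10 (
    check (created >= '2007-10-01' and created < '2007-11-01')
  ) inherits (foos);

  -- getting rid of that month later is then just
  drop table foos_2007_10;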
Sam