From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Brian Cox <brian(dot)cox(at)ca(dot)com>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Deleting millions of rows
Date: 2009-02-02 23:26:36
Message-ID: 603c8f070902021526x67d34095gff54c36295f504e0@mail.gmail.com
Lists: pgsql-performance
> It's the pending trigger list. He's got two trigger events per row,
> which at 40 bytes apiece would approach 4GB of memory. Apparently
> it's a 32-bit build of Postgres, so he's running out of process address
> space.
>
> There's a TODO item to spill that list to disk when it gets too large,
> but the reason nobody's done it yet is that actually executing that many
> FK check trigger events would take longer than you want to wait anyway.
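(To spell out the arithmetic behind that: at two pending events per row and ~40 bytes per event, a delete touching on the order of 50 million rows queues roughly

    50,000,000 rows x 2 events/row x 40 bytes/event ~= 4 GB

of trigger-event state, which is beyond what a 32-bit process can address. The row count is a back-of-the-envelope guess inferred from the 4 GB figure, not something stated in the thread.)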
Have you ever given any thought to whether it would be possible to
implement referential integrity constraints with statement-level
triggers instead of row-level triggers? IOW, instead of planning this
and executing it N times:
DELETE FROM ONLY <fktable> WHERE $1 = fkatt1 [AND ...]
...we could join the original query against fktable with join clauses
on the correct pairs of attributes and then execute it once.
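Very roughly, something like the sketch below. The names are hypothetical; "deleted_pks" stands in for whatever mechanism would expose the set of rows affected by the original statement, which statement-level triggers can't currently see:

DELETE FROM ONLY fktable USING deleted_pks d
 WHERE fktable.fkatt1 = d.pkatt1 [AND ...]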
Is this insanely difficult to implement?
...Robert