From: Gregory Stark <stark(at)enterprisedb(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Alvaro Herrera <alvherre(at)commandprompt(dot)com>, Brian Cox <brian(dot)cox(at)ca(dot)com>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Deleting millions of rows
Date: 2009-02-04 12:35:57
Message-ID: 87iqnqmlaq.fsf@oxford.xeocode.com
Lists: pgsql-performance
Robert Haas <robertmhaas(at)gmail(dot)com> writes:
> That's good if you're deleting most or all of the parent table, but
> what if you're deleting 100,000 values from a 10,000,000 row table?
> In that case maybe I'm better off inserting all of the deleted keys
> into a side table and doing a merge or hash join between the side
> table and the child table...
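A minimal sketch of that side-table approach, using hypothetical table
names (parent, child, a doomed staging table) and a placeholder
selection predicate:

    BEGIN;

    CREATE TEMP TABLE doomed (id int PRIMARY KEY) ON COMMIT DROP;

    -- Stage the keys to be removed.  The predicate here is a
    -- hypothetical stand-in for whatever picks out the 100,000 rows.
    INSERT INTO doomed
        SELECT id FROM parent
        WHERE id % 100 = 0;

    ANALYZE doomed;   -- give the planner real row counts to work with

    -- One set-oriented delete per table, so the planner can join the
    -- staged keys against child in a single pass instead of doing one
    -- FK-trigger index probe per deleted parent row.
    DELETE FROM child  USING doomed d WHERE child.parent_id = d.id;
    DELETE FROM parent USING doomed d WHERE parent.id = d.id;

    COMMIT;

With the keys materialized and analyzed, the planner can cost a hash or
merge join against child rather than being forced into 100,000
individual probes.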
It would be neat if we could feed the queued trigger tests into a plan node
like a Materialize and use the planner to determine which type of plan to
generate.
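That decision can at least be approximated by hand today: once the keys
sit in a real table, EXPLAIN exposes which join strategy the planner
would pick for the bulk check. A hypothetical illustration, reusing the
doomed staging table from the sketch above:

    EXPLAIN
    DELETE FROM child USING doomed d
     WHERE child.parent_id = d.id;

    -- For ~100,000 staged keys against a large child table the planner
    -- will typically choose a hash join over a scan of child; for a
    -- handful of keys it falls back to a nested loop of index probes,
    -- which is exactly the choice the queued FK triggers cannot make.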
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's Slony Replication support!