From: Brian Cox <brian(dot)cox(at)ca(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Deleting millions of rows
Date: 2009-02-02 22:01:12
Message-ID: 49876D28.5080403@ca.com
Lists: pgsql-performance
Tom Lane [tgl(at)sss(dot)pgh(dot)pa(dot)us] wrote:
> It's the pending trigger list. He's got two trigger events per row,
> which at 40 bytes apiece would approach 4GB of memory. Apparently
> it's a 32-bit build of Postgres, so he's running out of process address
> space.
Yes, this is a 32-bit Postgres running on 32-bit Linux. I assume the
two triggers are due to the two ON DELETE CASCADE FKs. Thanks for
explaining this bit of a mystery.
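For anyone finding this in the archives, here is a minimal sketch of the
kind of schema involved -- table and column names are made up, but the
shape matches our setup: two tables referencing the big one with ON
DELETE CASCADE, so each deleted parent row queues two pending
after-trigger events that are held in memory until commit.

    -- Hypothetical schema, for illustration only.
    CREATE TABLE big_table (
        id          bigint PRIMARY KEY,
        created_at  timestamptz NOT NULL,
        payload     text
    );

    CREATE TABLE child_a (
        id      bigint PRIMARY KEY,
        big_id  bigint NOT NULL REFERENCES big_table (id) ON DELETE CASCADE,
        note    text
    );

    CREATE TABLE child_b (
        id      bigint PRIMARY KEY,
        big_id  bigint NOT NULL REFERENCES big_table (id) ON DELETE CASCADE,
        tag     text
    );

    -- DELETE FROM big_table; queues one after-trigger event per FK per
    -- deleted row (2 per row here); at ~40 bytes apiece, tens of millions
    -- of rows is enough to exhaust a 32-bit process address space.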
> TRUNCATE is the best solution if you want to get rid of the whole table
> contents. If you're deleting very many but not all rows, people tend
> to drop the FK constraints and re-establish them afterwards. Retail
> checking is just too slow.
Thanks also to you (and several others) for reminding me of TRUNCATE.
This will definitely work for what I was trying to do: reset this table
for more testing.
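For the record, something like this is what I'll use to reset things
between test runs (table names are placeholders; note that CASCADE also
empties the referencing tables, so use it only when that's intended):

    -- Empties the parent and, via CASCADE, any tables with FKs pointing at it.
    TRUNCATE TABLE big_table CASCADE;

    -- Or list the tables explicitly to avoid surprises:
    TRUNCATE TABLE big_table, child_a, child_b;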
In production, the table on which I ran DELETE FROM grows constantly,
with old data removed in batches periodically (up to a few hundred
thousand rows, out of several million, per batch). I'm assuming that
autovacuum/analyze will let Postgres maintain reasonable performance
for INSERTs and SELECTs on it; do you think that's a reasonable
assumption? A sketch of how I'd handle those purges is below.
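In case it helps anyone searching the archives later, the drop-and-
re-establish pattern Tom mentions would look roughly like this for our
case. Constraint, table, and column names are invented, and the 90-day
cutoff is just an example; I'd also follow the bulk delete with a manual
VACUUM ANALYZE rather than waiting for autovacuum.

    BEGIN;
    -- Drop the cascading FKs so the DELETE doesn't queue per-row trigger events.
    ALTER TABLE child_a DROP CONSTRAINT child_a_big_table_fk;
    ALTER TABLE child_b DROP CONSTRAINT child_b_big_table_fk;

    -- Remove the child rows by hand, then the old parent rows.
    DELETE FROM child_a WHERE big_id IN
        (SELECT id FROM big_table WHERE created_at < now() - interval '90 days');
    DELETE FROM child_b WHERE big_id IN
        (SELECT id FROM big_table WHERE created_at < now() - interval '90 days');
    DELETE FROM big_table WHERE created_at < now() - interval '90 days';

    -- Re-establish the constraints (this rescans each child table once).
    ALTER TABLE child_a ADD CONSTRAINT child_a_big_table_fk
        FOREIGN KEY (big_id) REFERENCES big_table (id) ON DELETE CASCADE;
    ALTER TABLE child_b ADD CONSTRAINT child_b_big_table_fk
        FOREIGN KEY (big_id) REFERENCES big_table (id) ON DELETE CASCADE;
    COMMIT;

    -- Outside the transaction: reclaim space and refresh statistics.
    VACUUM ANALYZE big_table;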
Thanks,
Brian