| From: | Greg Smith <greg(at)2ndQuadrant(dot)com> | 
|---|---|
| To: | Rob Emery <re-pgsql(at)codeweavers(dot)net> | 
| Cc: | pgsql-performance(at)postgresql(dot)org | 
| Subject: | Re: Deleting Rows From Large Tables | 
| Date: | 2013-05-20 02:55:28 | 
| Message-ID: | 519990A0.6030402@2ndQuadrant.com | 
| Lists: | pgsql-performance | 
On 5/17/13 7:26 AM, Rob Emery wrote:
> I can keep decreasing the size of
> the window I'm deleting but I feel I must be doing something either
> fundamentally wrong or over-complicating this enormously.
I've had jobs like this where we ended up making the batch size cover 
only 4 hours at a time.  Once you've looked at the EXPLAIN plans for the 
row selection criteria and they're reasonable, dropping the period 
that's deleted per pass is really the only thing you can do.  Do some 
DELETEs, then pause to let the disk cache clear; repeat.
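Something like this sketch is where I usually end up; the table name,
column name, database, and interval values below are all placeholders
to adapt for your schema, and each pass commits on its own before pausing:

    #!/bin/sh
    # Delete-pause-repeat: each psql call is its own transaction, so
    # every 4-hour batch commits before the sleep.
    while true; do
        deleted=$(psql -X -At -d mydb -c "
            WITH bounds AS (
                SELECT min(created_at) AS lo FROM big_table
            )
            DELETE FROM big_table
            USING bounds
            WHERE created_at < bounds.lo + interval '4 hours'
              AND created_at < now() - interval '6 months'
            RETURNING 1;" | wc -l)
        echo "pass complete, removed ${deleted} rows"
        [ "$deleted" -eq 0 ] && break
        sleep 30    # let the disk cache clear before the next pass
    done

Since each batch is a separate transaction, autovacuum can start
reclaiming the dead rows from one pass while the next one runs.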
The other useful thing to do here is get very aggressive about settings 
for shared_buffers, checkpoint_segments, and checkpoint_timeout.  I'll 
normally push for settings like 8GB/256/15 minutes when doing this sort 
of thing.  The usual default of a checkpoint every 5 minutes may not
be feasible when you've got this type of work going on in the background.
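In postgresql.conf terms that works out to roughly this (shared_buffers
needs a server restart to change; the other two only need a reload):

    shared_buffers = 8GB
    checkpoint_segments = 256
    checkpoint_timeout = 15min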
-- 
Greg Smith   2ndQuadrant US    greg(at)2ndQuadrant(dot)com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com