From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: Brian Cox <brian(dot)cox(at)ca(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Robert Haas <robertmhaas(at)gmail(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Deleting millions of rows
Date: 2009-02-02 22:33:13
Message-ID: dcc563d10902021433v5398d5deo809c9ba0ae9a38e5@mail.gmail.com
Lists: pgsql-performance
On Mon, Feb 2, 2009 at 3:01 PM, Brian Cox <brian(dot)cox(at)ca(dot)com> wrote:
> In production, the table on which I ran DELETE FROM grows constantly with
> old data removed in bunches periodically (say up to a few 100,000s of rows
> [out of several millions] in a bunch). I'm assuming that auto-vacuum/analyze
> will allow Postgres to maintain reasonable performance for INSERTs and
> SELECTs on it; do you think that this is a reasonable assumption?
Yes. As long as you're deleting a small enough percentage that the table
doesn't get bloated (100k out of several million is a good ratio), AND
autovacuum is running, AND you have enough FSM entries to track the dead
tuples, you're gold.
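
For concreteness, here is a minimal sketch of how one might check those
conditions on a PostgreSQL 8.3-era server (current when this thread was
written), where the free space map is still sized by hand. The table name
"events" and all numeric values are made-up placeholders, not figures from
this thread:

    -- Hypothetical periodic purge in bunches, as described above:
    DELETE FROM events WHERE created_at < now() - interval '30 days';

    -- Confirm autovacuum is enabled:
    SHOW autovacuum;

    -- On 8.3 and earlier, a database-wide VACUUM VERBOSE prints an FSM
    -- summary at the end (something like "free space map contains N pages
    -- in M relations") and warns if the page slots needed exceed
    -- max_fsm_pages:
    VACUUM VERBOSE;

    -- Illustrative postgresql.conf entries; max_fsm_pages must be large
    -- enough to track the pages freed by each bulk DELETE:
    --   autovacuum = on
    --   max_fsm_pages = 2000000
    --   max_fsm_relations = 1000

Note that on 8.4 and later the max_fsm_* parameters are gone and the free
space map is maintained automatically, so only the autovacuum and
bloat-ratio conditions remain to worry about.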