From: "Ed L." <pgsql(at)bluepolka(dot)net>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Interpreting vacuum verbosity
Date: 2004-05-07 16:32:05
Message-ID: 200405071032.05479.pgsql@bluepolka.net
Lists: pgsql-general
On Friday May 7 2004 9:09, Tom Lane wrote:
> "Ed L." <pgsql(at)bluepolka(dot)net> writes:
> > I guess the activity just totally outran the ability of autovac to keep
> > up.
>
> Could you have been bit by autovac's bug with misreading '3e6' as '3'?
> If you don't have a recent version it's likely to fail to vacuum large
> tables often enough.
No, our autovac logs the number of changes (upd+del for vacuum, upd+ins+del for
analyze) on each round of checks, and we can see it was routinely running when
expected. The number of updates/deletes simply far exceeded the thresholds:
the vacuum threshold was 2000, and at times there might be 300,000 outstanding
changes in the 10-30 minutes between vacuums.
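For reference, the contrib pg_autovacuum of that era derived its vacuum
threshold from a base value plus a scale factor times the table's estimated
row count, and vacuumed when upd+del exceeded it. A minimal sketch of that
comparison (the function name, defaults, and sample row count are
illustrative assumptions, not our actual settings):

```python
def needs_vacuum(n_updated, n_deleted,
                 base_threshold=1000, scale_factor=2.0, reltuples=500):
    """Vacuum when upd+del exceeds base + scale * reltuples."""
    threshold = base_threshold + scale_factor * reltuples
    return (n_updated + n_deleted) > threshold

# With these assumed parameters the threshold is 2000, so 300,000
# changes between checks is roughly 150x over the trigger point.
print(needs_vacuum(200_000, 100_000))
```

The point is that the trigger fires fine; it just can't fire often enough
when the change rate dwarfs the threshold.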
Given the gradual performance degradation we saw over a period of days if
not weeks, and the extremely high numbers of unused tuples, I'm wondering
if something like data fragmentation is occurring, where we have to read
many disk pages to retrieve just a few live tuples from each page. This
cluster has 3 databases (2 nearly idle) with a total of 600 tables (about
300 in the active database). Gzipped dumps are 1.7GB.
max_fsm_relations = 1000 and max_fsm_pages = 10000. The pattern of
operations is a continuous stream of inserts, sequential-scan selects, and
deletes.
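For scale, a cluster with this much churn may need a much larger free space
map than those values: if the pages holding reclaimable space between vacuums
outnumber max_fsm_pages, the overflow is forgotten and the tables bloat even
though vacuum runs on schedule. A purely illustrative postgresql.conf
fragment (the numbers are assumptions, not a tuned recommendation, and these
settings require a server restart):

```
# postgresql.conf -- illustrative FSM sizing, not a recommendation
max_fsm_relations = 1000        # should cover all tables and indexes
max_fsm_pages = 500000          # should cover pages with free space
                                # across all relations between vacuums;
                                # far above the 10000 currently in use
```

Each FSM page slot costs only a few bytes of shared memory, so erring high
is cheap relative to the cost of unreclaimed space.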