From: | "Matthew T(dot) O'Connor" <matthew(at)zeut(dot)net> |
---|---|
To: | Greg Stark <gsstark(at)mit(dot)edu> |
Cc: | pgsql-hackers(at)postgresql(dot)org |
Subject: | Re: CREATE INDEX speeds up query on 31 row table ... |
Date: | 2004-09-30 20:29:23 |
Message-ID: | 415C6CA3.4050700@zeut.net |
Lists: pgsql-hackers

Greg Stark wrote:
>"Marc G. Fournier" <scrappy(at)postgresql(dot)org> writes:
>
>
>>Its a *very* busy table ... and running on a 7.4.0 database ...
>>
>>
>I wonder how many tuples are really in this table. Almost all of the 190ms is
>spent in the two sequential scans of it. Which is an awful lot of time for a
>32 tuple table.
>
>You say it's "*very* busy" is it possible there are hundreds or thousands of
>tuples in there that are uncommitted or committed after this query starts?
>
>Alternatively, try a "vacuum full". If that helps then autovacuum is having
>trouble keeping up with the rate of creation of dead tuples. You would have to
>increase the FSM parameters.
>
>
There are scenarios where pg_autovacuum does have trouble keeping up.
The answer is to set more aggressive autovacuum thresholds.
Unfortunately, the version of pg_autovacuum in contrib doesn't support
per-table thresholds, so you would have to set them cluster-wide, which
you may or may not want to do.
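As a rough sketch, a more aggressive startup line for the contrib daemon
might look something like the following. The -v/-V and -a/-A switches
are the base threshold and scale factor for vacuum and analyze; the
numbers here are only illustrative, so check the contrib README for the
exact options and defaults in your version:

    # Sketch only: lower the vacuum/analyze thresholds cluster-wide.
    # -v/-V = vacuum base threshold and scale factor,
    # -a/-A = analyze base threshold and scale factor,
    # -D    = run as a daemon.  All values are illustrative.
    pg_autovacuum -D -v 200 -V 0.5 -a 100 -A 0.25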
Alternatively, you can keep the settings as they are and have cron do a
daily VACUUM FULL on that table.
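Something like this in the postgres user's crontab would do it (the
database and table names are placeholders for whatever you actually
have):

    # Sketch: nightly VACUUM FULL ANALYZE of the one hot table.
    # Replace "mydb" and "busy_table" with the real names.
    30 3 * * *  psql -d mydb -c 'VACUUM FULL ANALYZE busy_table;'

Keep in mind that VACUUM FULL takes an exclusive lock on the table while
it runs, so schedule it for a quiet time.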
>Presumably autovacuum has warnings when it detects that scenario?
>
>
>
Unfortunately not. pg_autovacuum is pretty simple.