From:       Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To:         Philip Warner <pjw(at)rhyme(dot)com(dot)au>
Cc:         pgsql-hackers(at)postgresql(dot)org
Subject:    Re: Optimizer Question/Suggestion
Date:       2002-11-02 14:36:42
Message-ID: 20328.1036247802@sss.pgh.pa.us
Lists:      pgsql-hackers
Philip Warner <pjw(at)rhyme(dot)com(dot)au> writes:
> This comes about because we have a table with 800 rows, one or more of
> which are updated every second of most days. The result in normal usage is
> that the table contains about 10000 tuples one hour after vacuuming. Also,
> the database tries to be 24x7, and the table concerned is a core table, so
> vacuum/full once per hour is not an option.
Why not do frequent non-full vacuums on only that table, perhaps every
five minutes or so? That's certainly the direction that development is
headed in (we just haven't automated the vacuuming yet).
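(Illustrative only: the table name "core_table" and the cron/psql invocation
below are my own sketch, not something from the thread. The idea is just a
plain, per-table VACUUM scheduled frequently.)

    # crontab entry: plain (non-FULL) VACUUM of the one hot table every five minutes
    */5 * * * *  psql -d mydb -c 'VACUUM ANALYZE core_table;'

A plain VACUUM only marks dead tuples' space as reusable and doesn't take the
exclusive lock that VACUUM FULL does, so it can run against a 24x7 table;
later releases automate exactly this with autovacuum.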
> ISTM that if a table has a PK, then a bogus index scan should be introduced
> when the table has more than a 'RandomPageCost/SequentialPageCost' ratio of
> dead:live tuples.
The ratio would have to be higher than that, because ordinarily you
expect to get more than one tuple per sequential page read. But I think
this is going in the wrong direction anyway. Ideally we should never
let a table get so overloaded with dead space that this strategy would
be profitable.
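To make the arithmetic concrete (a back-of-envelope sketch with made-up
numbers, not figures from the thread): say a random page fetch is costed at 4
times a sequential one (the default ratio), and roughly 100 tuples fit on a
heap page. A seq scan over L live plus D dead tuples reads about (L + D)/100
pages sequentially, while an index scan fetching the L live tuples costs up to
L random page reads in the worst case. Setting the two equal:

    (L + D)/100 * 1  =  L * 4
    D/L              =  4 * 100 - 1  =  399

So the dead:live ratio would have to reach several hundred to one, not merely
4:1, before the bogus index scan could win on page-fetch costs alone.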
BTW, the system does not actually have any stats about dead tuples.
What it knows about are live tuples and total disk pages occupied by
the table.
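(For the curious: those are the relpages and reltuples estimates kept in
pg_class, which you can inspect directly, e.g.

    SELECT relname, relpages, reltuples
    FROM pg_class
    WHERE relname = 'mytable';

where 'mytable' is whatever table you care about. They are refreshed by
VACUUM and ANALYZE rather than maintained continuously.)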
regards, tom lane