| From: | Thomas <englim(at)pc(dot)jaring(dot)my> |
|---|---|
| To: | pgsql-general(at)postgresql(dot)org |
| Subject: | Postgresqlism & Vacuum? |
| Date: | 2000-04-14 05:13:12 |
| Message-ID: | 38F6A8E8.1529E76E@pc.jaring.my |
| Lists: | pgsql-general |
Ed Loehr wrote:
> ... it is a well-known "postgresqlism"
> that you should consider running vacuum analyze at least nightly, possibly
> more frequently. [I run it hourly.]
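For reference, such a schedule is usually driven by cron; a minimal
sketch, assuming a database named mydb (the name is illustrative):

    # run VACUUM ANALYZE nightly at 2am, e.g. from the postgres user's crontab
    0 2 * * * psql -d mydb -c "VACUUM ANALYZE;"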
I think there must be something wrong with the optimizer if it is a
"postgresqlism" that you must run VACUUM ANALYZE frequently. As an
analogy, with Clipper (the dBase compiler) it was a "Clipperism" that
you had to re-index whenever you could not locate some records, simply
because the indexing module screwed up.
For large 24x7 installations, vacuuming nightly is impossible: while
PostgreSQL is vacuuming, the table is locked, so to the end user the
database has effectively hung.
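To see why, consider this sketch of the locking behaviour, assuming a
table named big_table (the name is illustrative); VACUUM holds an
exclusive lock on the table for as long as it runs:

    -- Session 1: vacuum a large table; this can take a long time,
    -- and the table stays exclusively locked throughout
    VACUUM big_table;

    -- Session 2, started while session 1 is still running: this
    -- blocks until the vacuum finishes, so the application appears hung
    SELECT count(*) FROM big_table;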
There has been an effort to speed up the vacuuming process, but that is
not the cure. I believe the fault lies with the optimizer.
For example, from Bruce Momjian's FAQ, item 4.9:

    PostgreSQL does not automatically maintain statistics. One has to
    make an explicit vacuum call to update the statistics. After
    statistics are updated, the optimizer knows how many rows are in
    the table, and can better decide if it should use indices. Note
    that the optimizer does not use indices in cases when the table is
    small because a sequential scan would be faster.
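To make the FAQ's point concrete, a sketch of how the statistics drive
the plan, with big_table and its index on id as illustrative names:

    -- With stale statistics the planner may still believe the table
    -- is small and choose a sequential scan:
    EXPLAIN SELECT * FROM big_table WHERE id = 42;

    -- Update the row-count statistics the FAQ describes:
    VACUUM ANALYZE big_table;

    -- With fresh statistics (and an index on id), the planner can
    -- now justify an index scan:
    EXPLAIN SELECT * FROM big_table WHERE id = 42;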
Why save a few microseconds with a sequential scan while the table is
small, and then 'forget' that the table has since grown because you did
not run VACUUM ANALYZE? Why can't the optimizer simply use indexes
whenever they exist, instead of 'optimizing' for the special case of a
small table just to save microseconds?
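For what it's worth, on servers that support the planner toggles, one
blunt workaround is to disable sequential scans for a session; a hedged
sketch, again using the illustrative big_table:

    -- Discourage sequential scans for this session only, so the
    -- planner prefers an index whenever one is usable
    SET enable_seqscan TO 'off';
    EXPLAIN SELECT * FROM big_table WHERE id = 42;
    SET enable_seqscan TO 'on';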
Thomas
| | From | Date | Subject |
|---|---|---|---|
| Next Message | Andrew Snow | 2000-04-14 05:44:38 | Re: Postgresqlism & Vacuum? |
| Previous Message | Aage J. Skjolingstad | 2000-04-14 03:18:26 | Re: Remove fields |