From: Christopher Browne <cbbrowne(at)acm(dot)org>
To: pgsql-admin(at)postgresql(dot)org
Subject: Re: R: R: slow seqscan after vacuum analize
Date: 2004-02-05 04:18:39
Message-ID: m3d68unops.fsf@wolfe.cbbrowne.com
Lists: pgsql-admin
In the last exciting episode, eddy(at)axa(dot)it ("Edoardo Ceccarelli") wrote:
> Yes, you are right but it wasn't the case this time, I have run the
> explain plenty of times with same results. I think that the reason
> was that I made a simple VACUUM, after a VACUUM FULL ANALYZE (1h!!)
> things are ok
It sounds as though you weren't vacuuming ("just plain vacuum") often
enough.
What tends to go wrong is that the table accumulates a LOT of empty
space when it sees many updates without the dead tuples being cleaned
out. The table winds up big, with no way to shrink it short of the
cost of a VACUUM FULL.
If you vacuumed more often, the table would likely stay smaller,
which is sure to be helpful.
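As a rough sketch of the routine maintenance I mean (the table name
"big_table" is just a placeholder for whatever table gets the heavy
update traffic):

    -- Plain VACUUM marks dead tuples as reusable without the exclusive
    -- lock and long runtime of VACUUM FULL; ANALYZE refreshes statistics.
    VACUUM ANALYZE big_table;

Run something like that regularly (from cron, say) and the table
shouldn't bloat to the point where a VACUUM FULL becomes necessary.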
Another factor worth considering: If a few values are very common in
the field you are selecting on, then the query optimizer can get
convinced (wrongly) that a Seq Scan is the best choice. Using ALTER
TABLE T ALTER COLUMN C SET STATISTICS [some value] to increase the
number of "bins" can be helpful in such cases. (My pet theory is that
the present default value of 10 is a little low, and that a lot of
optimizer errors might be resolved by bumping it up a bit...)
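For concreteness, a sketch of what that might look like (the table "t",
column "c", and the target of 100 are only illustrative; pick values to
suit your data):

    -- Raise the number of histogram bins / most-common-value entries
    -- the planner keeps for this column.
    ALTER TABLE t ALTER COLUMN c SET STATISTICS 100;
    -- The new target only takes effect once statistics are re-gathered.
    ANALYZE t;

A higher target makes ANALYZE sample more rows for that column, which
gives the planner a better picture of a skewed value distribution.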
--
(format nil "~S(at)~S" "cbbrowne" "ntlug.org")
http://www.ntlug.org/~cbbrowne/sgml.html
But what can you do with it? -- ubiquitous cry from Linux-user
partner. -- Andy Pearce, <ajp(at)hpopd(dot)pwd(dot)hp(dot)com>