From: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
To: Carlos Moreno <moreno_pg(at)mochima(dot)com>
Cc: pgsql-performance(at)postgresql.org
Subject: Re: Possible explanations for catastrophic performance deterioration?
Date: 2007-09-23 20:57:59
Message-ID: 20070923205759.GD5679@alvh.no-ip.org
Lists: pgsql-performance
Carlos Moreno wrote:
> That is: the first time I run the query, it has to go through the
> disk; in the normal case it would have to read 100MB of data, but due
> to bloating, it actually has to go through 2GB of data. Ok, but
> then, it will load only 100MB (the ones that are not "uncollected
> disk garbage") to memory. The next time that I run the query, the
> server would only need to read 100MB from memory --- the result should
> be instantaneous...
Wrong. If there is 2GB of data, 1900MB of which is dead tuples, those
pages would still have to be scanned for the count(*). The system does
not distinguish "pages which have no live tuples" from other pages, so
it has to load them all.
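A quick way to see the effect is to compare the table's physical size, and the cost of a count(*), before and after the dead space is actually reclaimed. This is only a rough sketch; the bloat_demo table and its columns are made-up names for illustration, and note that VACUUM FULL locks the table while it rewrites it:

-- Build a table, then delete 95% of its rows to leave dead tuples behind.
CREATE TABLE bloat_demo (id int, payload text);
INSERT INTO bloat_demo SELECT g, repeat('x', 100) FROM generate_series(1, 100000) g;
DELETE FROM bloat_demo WHERE id % 20 <> 0;

-- Plain VACUUM marks the dead space reusable but does not shrink the file,
-- so a count(*) still has to read (nearly) every page of the table.
VACUUM bloat_demo;
SELECT pg_size_pretty(pg_relation_size('bloat_demo'));
EXPLAIN ANALYZE SELECT count(*) FROM bloat_demo;

-- VACUUM FULL rewrites the table compactly, so the same scan touches
-- only the pages that hold live tuples.
VACUUM FULL bloat_demo;
SELECT pg_size_pretty(pg_relation_size('bloat_demo'));
EXPLAIN ANALYZE SELECT count(*) FROM bloat_demo;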
--
Alvaro Herrera http://www.amazon.com/gp/registry/CTMLCN8V17R4
"[PostgreSQL] is a great group; in my opinion it is THE best open source
development communities in existence anywhere." (Lamar Owen)