From: "Marko Kreen" <markokr(at)gmail(dot)com>
To: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: "Alvaro Herrera" <alvherre(at)commandprompt(dot)com>, "Jeff Amiel" <becauseimjeff(at)yahoo(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: Out of Memory - 8.2.4
Date: 2007-08-30 08:01:19
Message-ID: e51f66da0708300101s448ee88bw6ca884615b8a3e8e@mail.gmail.com
Lists: pgsql-general
On 8/29/07, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Alvaro Herrera <alvherre(at)commandprompt(dot)com> writes:
> > I'm not having much luck really. I think the problem is that ANALYZE
> > stores reltuples as the number of live tuples, so if you delete a big
> > portion of a big table, then ANALYZE and then VACUUM, there's a huge
> > misestimation and extra index cleanup passes happen, which is a bad
> > thing.
>
> Yeah ... so just go with a constant estimate of say 200 deletable tuples
> per page?
Note that it's much better to err toward smaller values.
An extra index pass is really no problem. VACUUM itself failing
with "out of memory" may not sound like a big problem either, but
the scary case is when the last VACUUM's memory request succeeds:
then the queries that follow start failing, and that is a big problem.
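To make the trade-off concrete, here is a minimal sketch (hypothetical, not PostgreSQL source; function name and parameters are illustrative) of how a dead-TID array allocation behaves when sized from a per-page constant estimate and capped by maintenance_work_mem, the way the discussion above assumes:

```python
TID_BYTES = 6  # an on-disk tuple identifier (ItemPointerData) is 6 bytes


def dead_tid_alloc(rel_pages, est_dead_per_page, maintenance_work_mem_kb):
    """Bytes reserved for VACUUM's dead-TID array: the estimated need,
    capped by maintenance_work_mem (a sketch of the sizing arithmetic)."""
    wanted = rel_pages * est_dead_per_page * TID_BYTES
    cap = maintenance_work_mem_kb * 1024
    return min(wanted, cap)


# A 1M-page table with a constant estimate of 200 deletable tuples per
# page wants ~1.2 GB, but a 64 MB maintenance_work_mem caps it: VACUUM
# just makes extra index passes instead of grabbing a huge allocation.
print(dead_tid_alloc(1_000_000, 200, 64 * 1024))  # 67108864 (64 MB)

# A small table stays well under the cap, so only the true need is taken.
print(dead_tid_alloc(100, 200, 64 * 1024))  # 120000
```

A smaller estimate at worst costs an extra index-cleanup pass per batch of dead TIDs; an inflated estimate requests memory up front, and if that large request succeeds it is the concurrent queries, not VACUUM, that hit the wall.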
--
marko