From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
---|---|
To: | Alvaro Herrera <alvherre(at)commandprompt(dot)com> |
Cc: | Marko Kreen <markokr(at)gmail(dot)com>, Jeff Amiel <becauseimjeff(at)yahoo(dot)com>, pgsql-general(at)postgresql(dot)org |
Subject: | Re: Out of Memory - 8.2.4 |
Date: | 2007-08-28 23:41:27 |
Message-ID: | 15555.1188344487@sss.pgh.pa.us |
Lists: | pgsql-general |
Alvaro Herrera <alvherre(at)commandprompt(dot)com> writes:
> Marko Kreen wrote:
>> I've experienced something similar. The reason turned out to be a
>> combination of overcommit=off, a big maintenance_work_mem, and several
>> parallel vacuums on fast-changing tables. It seems VACUUM allocates the
>> full maintenance_work_mem before starting, whatever the actual size of
>> the table.
> Hmm. Maybe we should have VACUUM estimate the maximum amount of memory
> it could actually use, given the size of the table, and allocate only
> that much.
Yeah --- given the likelihood of parallel vacuum activity in 8.3,
it'd be good to not expend memory we certainly aren't going to need.
We could set a hard limit at RelationGetNumberOfBlocks *
MaxHeapTuplesPerPage TIDs, but that is *extremely* conservative
(it'd work out to allocating about a quarter of the table's actual size
in bytes, if I did the math right).
Given that the worst-case consequence is extra index vacuum passes,
which don't hurt that much when a table is small, maybe some smaller
estimate like 100 TIDs per page would be enough. Or, instead of
using a hard-wired constant, look at pg_class.reltuples/relpages
to estimate the average tuple density ...
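A sketch of that softer estimate, again with illustrative names rather
than real PostgreSQL code: take the per-page tuple density from
pg_class.reltuples / pg_class.relpages when statistics are available, fall
back to a flat 100 TIDs per page when they are not, and cap at
maintenance_work_mem as before:

```c
/* Sketch of the density-based estimate (names are illustrative). */
#include <stddef.h>

#define TID_BYTES              6       /* sizeof(ItemPointerData) */
#define FALLBACK_TIDS_PER_PAGE 100.0   /* flat guess for unanalyzed tables */

size_t
dead_tid_array_estimate(double reltuples, double relpages,
                        size_t rel_nblocks, size_t maint_work_mem_bytes)
{
    /* average tuple density from pg_class stats, if we have any */
    double density = (relpages > 0) ? reltuples / relpages
                                    : FALLBACK_TIDS_PER_PAGE;
    size_t estimate = (size_t) (rel_nblocks * density) * TID_BYTES;

    /* undershooting only costs extra index vacuum passes, so it is
     * safe to stay below maintenance_work_mem */
    return estimate < maint_work_mem_bytes ? estimate : maint_work_mem_bytes;
}
```

Undershooting here is harmless in the way described above: vacuum just
fills the smaller array sooner and makes an extra pass over the indexes.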
regards, tom lane