From: Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>
To: Alex Hochberger <alex(at)dsgi(dot)us>
Cc: Richard Huxton <dev(at)archonet(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Configuring a Large RAM PostgreSQL Server
Date: 2007-11-29 19:29:19
Message-ID: 20071129192919.GJ9567@alvh.no-ip.org
Lists: pgsql-performance
> On Nov 29, 2007, at 2:15 PM, Richard Huxton wrote:
>> Alex Hochberger wrote:
>>> Problem Usage: we have a 20GB table with 120m rows that we are
>>> splitting into some sub-tables. Generally, we do large data pulls from
>>> here, 1 million - 4 million records at a time, stored in a new table for
>>> export. These queries are problematic because we are unable to index the
>>> database for the queries that we run because we get out of memory errors.
>>
>> Would it not make sense to find out why you are getting these errors
>> first?
Alex Hochberger wrote:
> It's not on rebuilding the index, it's on CREATE INDEX.
>
> I attribute it to wrong setting, Ubuntu bizarre-ness, and general problems.
Please do not top-post. I reformatted your message for clarity.
Richard is still correct: it is not normal to get out-of-memory errors
during index building, regardless of server age or Linux distribution.
Perhaps you just have a maintenance_work_mem setting that's too large
for your server.
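For reference, maintenance_work_mem can be inspected and lowered for a
single session before running CREATE INDEX, without touching
postgresql.conf (the table, column, and index names below are
hypothetical, and the value shown is only an illustrative starting
point, not a recommendation for any particular server):

```sql
-- Show the current per-session setting (SHOW is standard PostgreSQL)
SHOW maintenance_work_mem;

-- Lower it for this session only; affects CREATE INDEX, VACUUM, etc.
SET maintenance_work_mem = '256MB';

-- Build the index with the reduced memory budget
-- (hypothetical table/column names for illustration)
CREATE INDEX idx_bigtable_col ON bigtable (some_col);
```

A SET issued this way lasts only until the session ends, so it is a
low-risk way to test whether an oversized maintenance_work_mem is what
is triggering the out-of-memory failures.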
--
Alvaro Herrera http://www.advogato.org/person/alvherre
"One can defend oneself against attacks; against praise one is defenseless"