From: Jeff Davis <pgsql(at)j-davis(dot)com>
To: John Cole <john(dot)cole(at)uai(dot)com>
Cc: "'pgsql-general(at)postgresql(dot)org'" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Out of memory on vacuum analyze
Date: 2007-02-19 19:19:41
Message-ID: 1171912781.10824.192.camel@dogma.v10.wvs
Lists: pgsql-general
On Mon, 2007-02-19 at 12:47 -0600, John Cole wrote:
> I have a large table (~55 million rows) and I'm trying to create an index
> and vacuum analyze it. The index has now been created, but the vacuum
> analyze is failing with the following error:
>
> ERROR: out of memory
> DETAIL: Failed on request of size 943718400.
>
> I've played with several settings, but I'm not sure what I need to set to
> get this to operate. I'm running on a dual Quad core system with 4GB of
> memory and Postgresql 8.2.3 on W2K3 Server R2 32bit.
>
> maintenance_work_mem is 900MB
> max_stack_depth is 3MB
> shared_buffers is 900MB
> temp_buffers is 32MB
> work_mem is 16MB
> max_fsm_pages is 204800
> max_connections is 50
>
You told PostgreSQL that it has 900MB available for
maintenance_work_mem, but your OS is denying the allocation: the failed
request of 943718400 bytes is exactly 900MB. Try *lowering* that setting
to something your OS will allow. 900MB is an awfully high value for a
32-bit build, where each process has only about 2GB of address space to
split between shared_buffers, the maintenance work area, and everything
else.
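You don't need to edit postgresql.conf and restart to test this: the
setting can be overridden for just the vacuuming session. A minimal
sketch (the table name and the 256MB value are placeholders, not
recommendations; pick a value your process can actually allocate):

```sql
-- Lower maintenance_work_mem for this session only, then re-run the
-- vacuum. Other sessions keep the value from postgresql.conf.
SET maintenance_work_mem = '256MB';
VACUUM ANALYZE my_large_table;
```

If that succeeds, you can lower the value in postgresql.conf at your
leisure.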
Regards,
Jeff Davis