From: Iñigo Martinez Lasala <imartinez(at)vectorsf(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: Log full with gigabytes of CurTransactionContext
Date: 2009-06-15 15:12:48
Message-ID: 1245078768.16064.45.camel@coyote
Lists: pgsql-admin
Thank you very much, Tom.
We have increased shared_buffers to 2.5GB (the Linux kernel allows us to
reach this level) and lowered work_mem to 500MB.
Let's see tonight. :-)
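For reference, the adjusted settings would look roughly like this in postgresql.conf (a sketch of the values described above, not our exact file):

```
shared_buffers = 2560MB        # raised from 2GB
work_mem = 500MB               # lowered from 2000MB; applies per sort/hash operation
maintenance_work_mem = 1024MB  # unchanged
```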
-----Original Message-----
From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Iñigo Martinez Lasala <imartinez(at)vectorsf(dot)com>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: [ADMIN] Log full with gigabytes of CurTransactionContext
Date: Mon, 15 Jun 2009 10:02:26 -0400
Iñigo Martinez Lasala <imartinez(at)vectorsf(dot)com> writes:
> We have a problem with an insert query in one of our clients. This query
> is launched in a nightly batch process. We have observed that if the change
> set is big (more than 50,000 updates), our database log starts growing
> with thousands and thousands of lines until the server is out of space
> and the database freezes.
You're running out of memory.
> work_mem = 2000MB
> maintenance_work_mem = 1024M
These two settings are probably the cause. With shared_buffers at 2GB,
you do not have anywhere near 1GB to play around with in a 32-bit
environment. Try something like 200M and 500M.
> Increasing temp buffers could help?
I can hardly think of anything more counterproductive. You don't
have enough address space.
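A back-of-the-envelope sketch of the address-space arithmetic behind this advice (the 3GB user-space limit is a common default on 32-bit Linux; this is an illustration of the constraint, not a measurement):

```python
# Rough 32-bit address-space budget for a single PostgreSQL backend.
# Assumes the common ~3GB per-process user-space limit on 32-bit Linux.
MB = 1024 * 1024
address_space = 3 * 1024 * MB   # ~3GB usable per process

shared_buffers = 2 * 1024 * MB  # 2GB, mapped into every backend's address space
work_mem = 2000 * MB            # allowed per sort/hash operation

headroom = address_space - shared_buffers
print(headroom // MB)           # 1024 -- about 1GB left for everything else
print(work_mem > headroom)      # True -- a single work_mem allocation cannot fit
```

With shared_buffers mapped in, less than 1GB of address space remains for the executable, stack, libraries, and all palloc'd memory, so a 2000MB work_mem cannot possibly be honored before allocations start failing.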
regards, tom lane