From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Vincent Dautremont <vincent(at)searidgetech(dot)com>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: out of memory error
Date: 2012-05-22 18:30:03
Message-ID: 5105.1337711403@sss.pgh.pa.us
Lists: pgsql-admin
Vincent Dautremont <vincent(at)searidgetech(dot)com> writes:
> I think I'm using the database for pretty basic stuff.
> It's mostly used with stored procedures to update/insert/select a row of
> each table.
> On 3 tables (fewer than 10 rows each), clients do updates/selects at 2 Hz
> to keep pseudo-real-time data up to date.
> I have a total of 6 clients on the DB, and they all access it through
> stored procedures, so I would say this is light usage of the DB.
> Then I have rubyrep 1.2.0 running to sync a backup of the DB.
> It syncs 8 tables: 7 of them don't really change often, and 1 is one of
> the pseudo-real-time ones.
This is not much information. What I suspect is happening is that
you're using plpgsql functions (or some other PL) in such a way that the
system is leaking cached plans for the functions' queries; but there is
far from enough evidence here to prove or disprove that, let alone debug
the problem if that is a correct guess. An entirely blue-sky guess as
to what your code might be doing to trigger such a problem would be
that you are constantly replacing the same function's definition via
CREATE OR REPLACE FUNCTION. But that could be totally wrong, too.
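To illustrate the kind of pattern meant here, a minimal sketch (the function name and body are invented for illustration; nothing in the report says the application actually does this):

```sql
-- Suspect pattern: a client re-issues the function definition on every
-- connection or every cycle, instead of defining it once at deploy time.
CREATE OR REPLACE FUNCTION update_realtime_row(p_id int, p_val float8)
RETURNS void AS $$
BEGIN
    UPDATE realtime_data SET val = p_val, ts = now() WHERE id = p_id;
END;
$$ LANGUAGE plpgsql;

-- ... called at 2 Hz by each client:
SELECT update_realtime_row(1, 42.0);
```

Each CREATE OR REPLACE invalidates the cached plans of any plpgsql code referencing the function, forcing them to be rebuilt; if the application redefines functions continuously, that churn is a plausible place to look for leaked plan memory.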
Can you put together a self-contained test case that triggers similar
growth in the server process size?
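As a starting point for such a test case, one hedged sketch (requires PostgreSQL 9.0+ for DO blocks; the loop count and function are arbitrary) is to hammer the suspected pattern in a loop while watching the backend's resident size from another terminal:

```sql
-- Run in a scratch database while monitoring the backend process
-- (e.g. with: ps -o pid,rss -p <backend_pid>) to see whether RSS grows.
DO $$
BEGIN
    FOR i IN 1..100000 LOOP
        EXECUTE 'CREATE OR REPLACE FUNCTION leak_probe() '
                'RETURNS int AS ''SELECT 1'' LANGUAGE sql';
        PERFORM leak_probe();
    END LOOP;
END;
$$;
```

If memory climbs steadily under a loop like this but not under a plain query loop, that would support the cached-plan theory; if not, the real workload differs in some way that matters and the test case needs to mimic it more closely.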
regards, tom lane