From: Gregory Stark <stark(at)enterprisedb(dot)com>
To: carl(dot)sopchak(at)cegis123(dot)com
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-general(at)postgresql(dot)org
Subject: Re: Newbie questions relating to transactions
Date: 2009-03-08 15:34:53
Message-ID: 87prgs81uq.fsf@oxford.xeocode.com
Lists: pgsql-general
Carl Sopchak <carl(dot)sopchak(at)cegis123(dot)com> writes:
> Well, the upgrade to 8.3 seemed to rid me of the command limit, but now I'm
> running out of memory. I have 2Gb physical and 8Gb swap (after adding 4Gb).
What do you mean you're running out of memory? For most parts of Postgres
that's only a problem if you've configured it to use more memory than your
system can handle -- such as setting work_mem or shared_buffers too large.
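For reference, the memory-related settings worth sanity-checking live in postgresql.conf; a minimal sketch (the numbers below are only illustrative, not recommendations for your machine):

```
# postgresql.conf -- illustrative values only
shared_buffers = 256MB        # shared cache, allocated once at server start
work_mem = 16MB               # per sort/hash operation, per backend --
                              # a complex query can use several multiples of this
maintenance_work_mem = 128MB  # used by VACUUM, CREATE INDEX, etc.
```

Note that work_mem in particular is not a global cap: each sort or hash step in each running query can claim that much on its own.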
One area that can cause problems is having too many trigger executions queued
up within a single large transaction. I don't know if that's what you're
running into, though.
> Is there a way for me to run this outside of one huge transaction? This
> really shouldn't be using more than a few hundred megs of RAM (assuming
> cursor records are all stored in memory)...
Personally I find it much more flexible to implement these types of jobs as
external scripts connecting as a client. That lets you stop/start transactions
freely. It also allows you to open multiple connections or run the client-side
code on a separate machine which can have different resources available.
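For instance, a rough sketch in Python of the pattern I mean: commit once per
batch from the client side rather than holding one huge transaction open for
the whole job. The chunking helper is real, but the connection and the SQL are
placeholders for whatever your job actually does (I'm assuming a DB-API style
driver such as psycopg2):

```python
from itertools import islice

def chunked(iterable, size):
    """Yield successive lists of at most `size` items from `iterable`."""
    it = iter(iterable)
    while True:
        batch = list(islice(it, size))
        if not batch:
            return
        yield batch

def run_job(conn, rows, batch_size=1000):
    # `conn` would be a DB-API connection, e.g. psycopg2.connect(...);
    # placeholder here.  Committing per batch releases per-transaction
    # state on the server (such as queued trigger events) as you go.
    cur = conn.cursor()
    for batch in chunked(rows, batch_size):
        for row in batch:
            cur.execute("UPDATE ...")  # whatever the job does per row
        conn.commit()
```

You can also point several such clients at the server at once, or run them on
a different box entirely, which a server-side function can't do.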
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's Slony Replication support!