From: Dawn Hollingsworth <dmh(at)airdefense(dot)net>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-performance(at)postgresql(dot)org, Ben Scherrey <scherrey(at)proteus-tech(dot)com>
Subject: Re: Postgres Connections Requiring Large Amounts of Memory
Date: 2003-06-17 11:03:28
Message-ID: 1055847810.2833.260.camel@kaos
Lists: pgsql-performance
Each stored procedure only updates one row and inserts one row.
I just connected the user interface to the database. It only does
selects on startup. Its connection jumped to a memory usage of 256MB.
It's not getting any larger, but it's not getting any smaller either.
I'm going to compile Postgres with SHOW_MEMORY_STATS defined. I'm
assuming I can just set ShowStats equal to 1. I'll also pare down the
application to use only one of the stored procedures, for less noise,
and maybe I can track where the memory is going. In the meantime I'll
get a test going with Postgres 7.3 to see if I get the same behavior.
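For reference, a sketch of that rebuild (the install prefix is an
assumption, and the exact macro spelling should be checked against the
source tree in use):

```shell
# Rebuild with SHOW_MEMORY_STATS defined so per-context memory
# statistics can be reported (a compile-time option in the 7.2 sources).
cd postgresql-7.2.x
CPPFLAGS=-DSHOW_MEMORY_STATS ./configure --prefix=/usr/local/pgsql
make && make install
```

An alternative that needs no rebuild is attaching gdb to the suspect
backend and calling MemoryContextStats(TopMemoryContext), which dumps
every memory context's usage to the server's stderr.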
Any other suggestions?
-Dawn
On Tue, 2003-06-17 at 22:03, Tom Lane wrote:
> The only theory I can come up with is that the deferred trigger list is
> getting out of hand. Since you have foreign keys in all the tables,
> each insert or update is going to add a trigger event to the list of
> stuff to check at commit. The event entries aren't real large but they
> could add up if you insert or update a lot of stuff in a single
> transaction. How many rows do you process per transaction?
>
> regards, tom lane
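Tom's theory can be illustrated with a small sketch (table and column
names here are hypothetical): each row inserted into a table with a
foreign key queues one deferred trigger event, and that list is only
drained, and its memory released, at commit.

```sql
-- Hypothetical schema: child.parent_id carries a foreign key.
CREATE TABLE parent (id integer PRIMARY KEY);
CREATE TABLE child (
    id        integer PRIMARY KEY,
    parent_id integer REFERENCES parent (id)
);

BEGIN;
-- Each insert below queues one entry on the deferred trigger list
-- for the referential-integrity check; with many rows per
-- transaction the list can grow large before COMMIT releases it.
INSERT INTO child VALUES (1, 1);
INSERT INTO child VALUES (2, 1);
-- ... thousands more rows ...
COMMIT;
```

If this is the cause, committing in smaller batches (or dropping and
re-adding the constraint around a bulk load) should bound the
per-connection memory growth.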