From: SZŰCS Gábor <surrano(at)mailbox(dot)hu>
To: <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Postgres Connections Requiring Large Amounts of Memory
Date: 2003-06-18 07:11:25
Message-ID: 004a01c33568$d48235c0$0403a8c0@fejleszt4
Lists: pgsql-performance
----- Original Message -----
From: "Dawn Hollingsworth" <dmh(at)airdefense(dot)net>
Sent: Tuesday, June 17, 2003 11:42 AM
> I'm not starting any of my own transactions and I'm not calling stored
> procedures from within stored procedures. The stored procedures do have
> large parameter lists, up to 100. The tables are from 300 to 500
Geez! I don't think it'll help you find the memory leak (if any), but
couldn't you normalize the tables into smaller ones? That may be a pain when
updating (views and rules), but I think it'd be worth it in resources (time
and memory, though maybe not disk space). I also wonder what the maximum
number of updated columns is, and how little those columns' semantics have
in common within a single transaction (i.e. one func call), since there are
"only" 100 params for a proc.
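
Just to illustrate the kind of split I mean (a rough sketch only; the table
and column names below are invented, not taken from your schema):

CREATE TABLE sensor_core (
    sensor_id  int4 PRIMARY KEY,
    last_seen  timestamptz
);

-- one narrow table per group of semantically related columns
CREATE TABLE sensor_counters (
    sensor_id  int4 PRIMARY KEY REFERENCES sensor_core,
    rx_packets int8,
    tx_packets int8
);

-- a view keeps the old wide interface for readers
CREATE VIEW sensor_wide AS
    SELECT c.sensor_id, c.last_seen, n.rx_packets, n.tx_packets
    FROM sensor_core c JOIN sensor_counters n USING (sensor_id);

-- updating through the view needs a rule (this is the painful part)
CREATE RULE sensor_wide_upd AS ON UPDATE TO sensor_wide DO INSTEAD (
    UPDATE sensor_core SET last_seen = NEW.last_seen
        WHERE sensor_id = OLD.sensor_id;
    UPDATE sensor_counters SET rx_packets = NEW.rx_packets,
                               tx_packets = NEW.tx_packets
        WHERE sensor_id = OLD.sensor_id;
);

The stored procedure would then touch only the narrow tables it actually
needs, instead of one 300-500 column row.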
> columns. 90% of the columns are either INT4 or INT8. Some of these
> tables are inherited. Could that be causing problems?
Huh. That still leaves 30-50 columns of other types (the size of a fairly
large table for me) :)
G.
------------------------------- cut here -------------------------------