From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Aaron Turner <synfinatic(at)gmail(dot)com>
Cc: pgsql-performance <pgsql-performance(at)postgresql(dot)org>
Subject: Re: large dataset with write vs read clients
Date: 2010-10-07 19:00:06
Message-ID: 20101007190006.GA26232@tamriel.snowman.net
Lists: pgsql-performance
* Aaron Turner (synfinatic(at)gmail(dot)com) wrote:
> The graphing front end CGI is all SELECT. There's 12k tables today,
> and new tables are created each month.
That's a heck of a lot of tables.. Probably more than you really need.
Not sure if reducing that number would help query times though.
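If you're curious what all those tables are costing you, something along
these lines (just a sketch; adjust the schema filter to your setup) will
count the user tables and their total on-disk size:

  -- rough sketch: count user tables and their total on-disk size
  SELECT count(*) AS user_tables,
         pg_size_pretty(sum(pg_total_relation_size(c.oid))::bigint) AS total_size
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
  WHERE c.relkind = 'r'
    AND n.nspname NOT IN ('pg_catalog', 'information_schema');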
> The number of rows per table
> is 100-700k, with most in the 600-700K range. 190GB of data so far.
> Good news is that queries have no joins and are limited to only a few
> tables at a time.
Have you got indexes and whatnot on these tables?
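If the stats collector is on, a quick sanity check like this (rough
sketch) will show which tables are being hit with sequential scans
rather than index scans:

  -- rough sketch: tables ordered by sequential-scan activity
  SELECT relname, seq_scan, seq_tup_read, idx_scan
  FROM pg_stat_user_tables
  ORDER BY seq_tup_read DESC
  LIMIT 20;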
> Basically, each connection is taking about 100MB resident. As we need
> to increase the number of threads to be able to query all the devices
> in the 5 minute window, we're running out of memory. There aren't
> that many CGI connections at any one time, and obviously query
> performance isn't great, but honestly it's surprisingly good all things
> considered.
I'm kind of surprised at each connection taking 100MB, especially ones
which are just doing simple inserts.
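It'd be worth checking whether that 100MB is really private memory or
mostly shared_buffers being counted against each backend; on Linux, RES
in top includes the shared buffer pages a backend has touched, so a
long-lived connection can look much bigger than it really is. A quick
look at the relevant settings (just a sketch) would be:

  -- rough sketch: settings that drive shared and per-backend memory use
  SELECT name, setting, unit
  FROM pg_settings
  WHERE name IN ('shared_buffers', 'work_mem',
                 'maintenance_work_mem', 'max_connections');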
Thanks,
Stephen