From: William Yu <wyu(at)talisys(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: 15,000 tables - next step
Date: 2005-12-04 09:21:42
Message-ID: dmucf7$20ff$1@news.hub.org
Lists: pgsql-performance
Michael Riess wrote:
>> Well, I'd think that's where your problem is. Not only do you have a
>> (relatively speaking) small server -- you also share it with other
>> very memory-hungry services! That's not a situation I'd like to be in.
>> Try putting Apache and Tomcat elsewhere, and leave the bulk of the 1GB
>> to Postgres.
>
>
> No can do. I can try to switch to a 2GB machine, but I will not use
> several machines. Not for a 5GB database. ;-)
>
>> With 1500 shared buffers you are not really going
>> anywhere -- you should have ten times that at the very least.
>>
>
> Like I said - I tried to double the buffers and the performance did not
> improve in the least. And I also tried this on a 2GB machine, and
> swapping was not a problem. If I used 10x more buffers, I would in
> essence remove the OS buffers.
Increasing buffers does improve performance -- if you have enough memory.
You just don't have enough memory to play with. My servers run w/ 10K
buffers (128MB on 64-bit FC4), and they definitely run better at 10K
than at 1500.
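For reference, the change being discussed is a one-line edit in postgresql.conf (a sketch only -- the exact value depends on your RAM, and on PostgreSQL versions of that era the setting is a count of 8KB pages rather than a byte size):

```
# postgresql.conf -- hypothetical values for a machine with ~1GB RAM
shared_buffers = 10000      # ~80MB of 8KB pages (up from the 1500 discussed above)
```

A restart of the postmaster is needed for shared_buffers to take effect.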
With that many tables, your system catalogs are probably huge. To keep
your system catalog from continually cycling in-out of buffers/OS
cache/disk, you need a lot more memory. Ordinarily, I'd say the 500MB
you have available for Postgres to cache 5GB is a workable ratio. My
servers all have similar ratios of ~1:10 and they perform pretty well --
*except* when the system catalogs bloated due to lack of vacuuming on
system tables. My app regularly creates & drops thousands of temporary
tables leaving a lot of dead rows in the system catalogs. (Nearly the
same situation as you -- instead of 15K live tables, I had 200 live
tables and tens of thousands of dead table records.) Even with almost
8GB of RAM dedicated to postgres, performance on every single query --
no matter how small the table was -- took forever because the query
planner had to spend a significant period of time scanning through my
huge system catalogs to build the execution plan.
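One way to check whether the catalogs have reached that state (a sketch; relpages and reltuples in pg_class are only estimates, refreshed by VACUUM/ANALYZE):

```sql
-- Rough on-disk size of the catalogs the planner consults on every
-- query. relpages is in 8KB pages; a huge page count here with only
-- a few thousand live tables suggests dead-row bloat from table churn.
SELECT relname, relpages, reltuples
FROM pg_class
WHERE relname IN ('pg_class', 'pg_attribute', 'pg_index', 'pg_type')
ORDER BY relpages DESC;
```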
While my situation was fixable by scheduling a nightly vacuum/analyze
on the system catalogs to get rid of the bazillion dead table/index
entries, you have no choice but to get more memory so you can stuff your
entire system catalog into buffers/OS cache. Personally, w/ 1GB of ECC
RAM at ~$85, it's a no-brainer. Get as much memory as your server can
support.
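The nightly cleanup mentioned above could be as simple as a crontab entry for the postgres user (a hypothetical schedule; vacuumdb ships with PostgreSQL, and --all --analyze covers the system catalogs along with everything else):

```
# Run at 03:00 every night: vacuum and analyze every database,
# system catalogs included, so dead pg_class/pg_attribute rows
# are reclaimed before they bloat the planner's catalog lookups.
0 3 * * * vacuumdb --all --analyze --quiet
```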