From: Dave Crooke <dcrooke(at)gmail(dot)com>
To: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
Cc: Syed Asif Tanveer <asif(dot)tanveer(at)analytics(dot)com(dot)pk>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Postgres configuration for 8 CPUs, 6 GB RAM
Date: 2012-11-28 00:29:07
Message-ID: CALi4UpiU+ioU3x+EVCfPVdhWEFiMkMz91JQRsxZ+5C348deo4A@mail.gmail.com
Lists: pgsql-performance
Asif:
1. 6GB is pretty small .... once you work through the issues below, adding RAM
will probably be a good investment, depending on how your working set varies
over time.
A quick rule of thumb is this:
- If your cache hit ratio is significantly larger than (cache size / db
size), then there is locality of reference among queries; and if the hit
ratio is less than the high 90s percent, there is a high probability
that adding incremental RAM for caching by the OS and/or PG itself will
make things significantly better. This applies both to database-wide
averages and to individual slow query types (a quick query to check the
ratio is sketched just after this list).
- Look for long-running queries spilling merges and sorts to disk; if these
are a concern, then adding RAM, leaving it outside the buffer cache, and
setting larger work_mem sizes will improve their performance (see the second
sketch below).
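For reference, a rough way to check the database-wide hit ratio is to query
the stats collector. This is only a sketch; the figures are cumulative since
the last stats reset, so sample over an interval if you want current behaviour:

    -- approximate shared-buffer hit ratio for the current database
    SELECT blks_hit::float / NULLIF(blks_hit + blks_read, 0) AS cache_hit_ratio
    FROM pg_stat_database
    WHERE datname = current_database();

Bear in mind blks_read only counts misses of PG's own shared buffers; many of
those reads will still be served from the OS page cache rather than disk.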
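For the spill check, on 9.1 you can set log_temp_files = 0 in postgresql.conf
so every temp file a query writes gets logged, then experiment per-session
with work_mem on a suspect query. The 256MB and the table/column names below
are placeholders; size work_mem to your actual sorts rather than setting it
that high globally:

    -- first, in postgresql.conf:  log_temp_files = 0   (logs every temp file written)
    SET work_mem = '256MB';    -- session-only experiment, not a global setting
    EXPLAIN ANALYZE
    SELECT * FROM your_big_table ORDER BY some_column;   -- placeholder names
    -- watch "Sort Method" change from "external merge  Disk: ..."
    -- to "quicksort  Memory: ..."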
2. You also need to consider how many queries are running concurrently;
limiting concurrent executions to a strict number, e.g. by placing the
database behind a connection pooler, avoids contention for disk head seeks
and usually improves overall throughput (a rough pooler config is sketched
below).
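Something like pgbouncer in transaction-pooling mode works well for this; a
minimal sketch of its config, with made-up names and sizes (tune
default_pool_size to roughly your effective spindle/CPU count):

    ; pgbouncer.ini (sketch) -- the app connects to port 6432 instead of 5432
    [databases]
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction      ; a server connection is held only for the transaction
    default_pool_size = 16       ; concurrent server connections per db/user pair
    max_client_conn = 200        ; client connections pgbouncer itself will accept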
3. If I/O is a real bottleneck, especially random access, you might
consider more drives.
4. If the data access is truly all over the place, or you have lots of
queries which touch large chunks of the data, then depending on your
budget, a cheap high-RAM machine built from a desktop motherboard, which
will allow you to have e.g. 128GB of RAM in low-cost modules and thus keep
the entire DB in RAM, is definitely worth considering as a replica server on
which to offload some queries. I priced this out at around US$2000 here in
America using high quality parts.
These performance tweaks are all of course interrelated ... e.g. if the
access patterns are amenable to caching, then adding RAM will reduce I/O
load without any further changes, and item 3 may cease to be a problem.
Be careful of the bottleneck issue ... if you're a long way from the
performance you need, then fixing one issue will expose another etc. until
every part of the system is quick enough to keep up.
Don't forget that your time is worth money too, and throwing more hardware
at it is one of many viable strategies.
Cheers
Dave
On Tue, Nov 27, 2012 at 1:53 PM, Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com> wrote:
> On Tue, Nov 27, 2012 at 12:47 AM, Syed Asif Tanveer
> <asif(dot)tanveer(at)analytics(dot)com(dot)pk> wrote:
> > Hi,
> >
> > I am using PostgreSQL 9.1.5 for data warehousing and OLAP purposes. Data
> > size is around 100 GB and I have tuned my PostgreSQL accordingly, still I
> > am facing performance issues. The query performance is too low despite
> > tables being properly indexed and vacuumed and analyzed on a regular
> > basis. CPU usage never exceeded 15% even at peak usage times. Kindly
> > guide me through if there are any mistakes in setting configuration
> > parameters. Below are my system specs and please find attached my
> > postgresql configuration parameters for the current system.
>
> I notice that you've got autovac nap time of 60 minutes, so it's
> possible you've managed to bloat your tables a fair bit. What do you
> get running the queries from this page:
>
> http://wiki.postgresql.org/wiki/Show_database_bloat