From: | "Gopal" <gopal(at)getmapping(dot)com> |
---|---|
To: | <pgsql-general(at)postgresql(dot)org> |
Subject: | Postgres scalability and performance on windows |
Date: | 2006-11-23 22:31:40 |
Message-ID: | A5DE6132B8D812419321747E42710E2204C05E@EXCHANGE01.gm.local |
| Lists: | pgsql-general |
Hi all,
I have a postgres installation that's running at 70-80% CPU usage, while an
MSSQL7 installation doing roughly the same thing sits at 1-2% CPU load.
Here's the scenario:
300 queries/second
Server: Postgres 8.1.4 on win2k server
CPU: Dual Xeon 3.6 Ghz,
Memory: 4GB RAM
Disks: 3 x 36gb , 15K RPM SCSI
C# based web application calling postgres functions using npgsql 0.7.
It's an almost completely read-only DB apart from fortnightly updates.
Table 1 - About 300,000 rows with simple rectangles
Table 2 - 1 million rows
Total size: 300MB
Functions: Simple coordinate reprojection and intersection query + inner join of table1 and table2.
I think I have all the right indexes defined and indeed the performance
for queries under low loads is fast.
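To give an idea of the shape of the work (the column names below are made up for illustration, not the real schema), each request boils down to something like a bounding-box intersection on table1 joined to the detail rows in table2:

-- Hypothetical columns: rectangles in table1 stored as min/max coordinates.
SELECT t2.*
FROM table1 t1
JOIN table2 t2 ON t2.table1_id = t1.id
WHERE t1.x_min <= 510000 AND t1.x_max >= 500000
  AND t1.y_min <= 160000 AND t1.y_max >= 150000;

-- Indexes along these lines exist on the filter and join columns:
CREATE INDEX table1_bbox_idx ON table1 (x_min, x_max, y_min, y_max);
CREATE INDEX table2_table1_id_idx ON table2 (table1_id);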
==================================================================================
postgresql.conf has the following settings:
max_connections = 150
shared_buffers = 20000                  # min 16 or max_connections*2, 8KB each
temp_buffers = 2000                     # min 100, 8KB each
max_prepared_transactions = 25          # can be 0 or more
# note: increasing max_prepared_transactions costs ~600 bytes of shared memory
# per transaction slot, plus lock space (see max_locks_per_transaction).
work_mem = 512                          # min 64, size in KB
#maintenance_work_mem = 16384           # min 1024, size in KB
max_stack_depth = 2048
effective_cache_size = 82728            # typically 8KB each
random_page_cost = 4                    # units are one sequential page fetch
==================================================================================
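As far as I understand the units (buffers are 8KB pages), those settings work out to roughly:

shared_buffers       = 20000 x 8KB = ~156MB of dedicated buffer cache
temp_buffers         = 2000  x 8KB = ~16MB per backend (only when temp tables are used)
work_mem             = 512KB per sort/hash operation, per backend
effective_cache_size = 82728 x 8KB = ~646MB assumed to be cached by postgres + the OS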
SQL Server caches all the data in memory, which is what makes it faster (it uses
about 1.2GB of memory, which is fine). But postgres has everything spread across
10-15 processes, each using only about 10-30MB; that is not nearly enough to cache
all the data, so it ends up doing a lot of disk reads.
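If it matters, I assume the hit/read ratio can be checked from the statistics views (this needs stats_block_level = on in 8.1), with something like:

-- Blocks found in shared_buffers (hit) vs. requested from the OS/disk (read)
SELECT relname, heap_blks_read, heap_blks_hit, idx_blks_read, idx_blks_hit
FROM pg_statio_user_tables
ORDER BY heap_blks_read DESC;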
I've read that postgres depends on the OS to cache the files; I wonder if this is
not happening on windows. In any case I cannot believe that having 15-20 processes
running on windows helps. Why not spawn threads instead of processes, which might
be far less expensive and more efficient? Is there any way of doing this?
My question is: should I just accept the performance I am getting as the limit on
windows, or should I be looking at some other params that I might have missed?
Thanks,
Gopal