From: | "Stefano Nichele" <stefano(dot)nichele(at)gmail(dot)com> |
---|---|
To: | pgsql-performance(at)postgresql(dot)org |
Subject: | Re: understanding postgres issues/bottlenecks |
Date: | 2009-01-11 15:00:10 |
Message-ID: | bdb13b210901110700s42243552ibfed8d28405c5e12@mail.gmail.com |
Lists: pgsql-performance
Hi All,
I ran pgbench. Here are some results:
-bash-3.1$ pgbench -c 50 -t 1000
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 100
number of clients: 50
number of transactions per client: 1000
number of transactions actually processed: 50000/50000
tps = 377.351354 (including connections establishing)
tps = 377.788377 (excluding connections establishing)
Some vmstat samplings in the meantime:
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff   cache   si   so    bi    bo    in    cs us sy id wa st
 0  4     92 127880   8252 3294512    0    0   458 12399  2441 14903 22  9 34 35  0
11 49     92 125336   8288 3297016    0    0   392 11071  2390 11568 17  7 51 24  0
 0  2     92 124548   8304 3297764    0    0   126  8249  2291  3829  5  3 64 28  0
 0  1     92 127268   7796 3295672    0    0   493 11387  2323 14221 23  9 47 21  0
 0  2     92 127256   7848 3295492    0    0   501 10654  2215 14599 24  9 42 24  0
 0  2     92 125772   7892 3295656    0    0    34  7541  2311   327  0  1 59 40  0
 0  1     92 127188   7952 3294084    0    0   537 11039  2274 15460 23 10 43 24  0
 7  4     92 123816   7996 3298620    0    0   253  8946  2284  7310 11  5 52 32  0
 0  2     92 126652   8536 3294220    0    0   440  9563  2307  9036 13  6 56 25  0
 0 10     92 125268   8584 3296116    0    0   426 10696  2285 11034 20  9 39 32  0
 0  2     92 124168   8604 3297252    0    0   104  8385  2319  4162  3  3 40 54  0
 0  8     92 123780   8648 3296456    0    0   542 11498  2298 16613 25 10 16 48  0
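(If you want to reproduce the vmstat sampling alongside the benchmark, something along these lines should do it; the 10-second interval and the log file name are arbitrary choices here:)

# run vmstat in the background, drive the load with pgbench, then stop vmstat
vmstat 10 > vmstat-during-pgbench.log &
VMSTAT_PID=$!
pgbench -c 50 -t 1000
kill $VMSTAT_PID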
-bash-3.1$ pgbench -t 10000 -c 50 -S
starting vacuum...end.
transaction type: SELECT only
scaling factor: 100
number of clients: 50
number of transactions per client: 10000
number of transactions actually processed: 500000/500000
tps = 8571.573651 (including connections establishing)
tps = 8594.357138 (excluding connections establishing)
(next test is with scaling factor 1)
-bash-3.1$ pgbench -t 20000 -c 8 -S pgbench
starting vacuum...end.
transaction type: SELECT only
scaling factor: 1
number of clients: 8
number of transactions per client: 20000
number of transactions actually processed: 160000/160000
tps = 11695.895318 (including connections establishing)
tps = 11715.603720 (excluding connections establishing)
Any comments?
I can also give you some details about my application's database usage:
- number of active connections: about 60
- number of idle connections: about 60 (see the quick query below for how these can be counted)
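(For the record, a quick way to get these counts; this assumes the current_query column of pg_stat_activity, where idle backends show up as '<IDLE>', and 'mydb' is just a placeholder for the real database name:)

psql -d mydb -c "SELECT CASE WHEN current_query = '<IDLE>' THEN 'idle' ELSE 'active' END AS state,
                        count(*)
                   FROM pg_stat_activity
                  GROUP BY 1;"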
Here are some numbers from an old pgFouine report of mine:
- query peak: 378 queries/s
- select: 53.1%, insert 3.8%, update 2.2%, delete 2.8%
The application is basically a web application and the db size is 37 GB.
Is there a way to get the number of queries per second and the percentages of
select/update/insert/delete without pgFouine?
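The closest I could come up with myself is sampling the statistics views and diffing the counters by hand, something like the sketch below (it only gives transaction and row-level insert/update/delete counts, not statement counts, so it is an approximation; 'mydb' is again a placeholder):

# snapshot the counters, wait a minute, snapshot again, and diff by hand
psql -d mydb -c "SELECT sum(xact_commit + xact_rollback) AS xacts FROM pg_stat_database;"
psql -d mydb -c "SELECT sum(n_tup_ins) AS ins, sum(n_tup_upd) AS upd, sum(n_tup_del) AS del FROM pg_stat_user_tables;"
sleep 60
psql -d mydb -c "SELECT sum(xact_commit + xact_rollback) AS xacts FROM pg_stat_database;"
psql -d mydb -c "SELECT sum(n_tup_ins) AS ins, sum(n_tup_upd) AS upd, sum(n_tup_del) AS del FROM pg_stat_user_tables;"

But I don't know whether that is the recommended approach, hence the question.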
What is the performance impact of stats_start_collector = on and
stats_row_level = on? (They are both on since I use autovacuum.)
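(The values currently in effect on the server can be confirmed with:)

psql -c "SHOW stats_start_collector;"
psql -c "SHOW stats_row_level;"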
Thanks a lot for your help.
ste
On Wed, Jan 7, 2009 at 8:05 PM, Stefano Nichele
<stefano(dot)nichele(at)gmail(dot)com> wrote:
> Ok, here some information:
>
> OS: CentOS 5.x (Linux 2.6.18-53.1.21.el5 #1 SMP Tue May 20 09:34:18 EDT
> 2008 i686 i686 i386 GNU/Linux)
> RAID: it's a hardware RAID controller
> The disks are 9600rpm SATA drives
>
> (6 disk 1+0 RAID array and 2 separate disks for the OS).
>
>
> About iostat (on sdb I have pg_xlog, on sdc I have data)
>
> > iostat -k
> Linux 2.6.18-53.1.21.el5 (*******) 01/07/2009
>
> avg-cpu: %user %nice %system %iowait %steal %idle
> 17.27 0.00 5.13 45.08 0.00 32.52
>
> Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
> sda 30.42 38.50 170.48 182600516 808546589
> sdb 46.16 0.23 52.10 1096693 247075617
> sdc 269.26 351.51 451.00 1667112043 2138954833
>
>
>
> > iostat -x -k -d 2 5
> Linux 2.6.18-53.1.21.el5 (*******) 01/07/2009
>
> Device:         rrqm/s   wrqm/s    r/s    w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
> sda               0.17    12.68   0.47  29.95    38.51   170.51    13.74     0.03    0.86   0.19   0.57
> sdb               0.01    80.11   0.05  46.11     0.23    52.01     2.26     0.01    0.22   0.22   1.01
> sdc               7.50    64.57 222.55  46.69   350.91   450.98     5.96     0.57    2.05   3.13  84.41
>
> Device:         rrqm/s   wrqm/s    r/s    w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
> sda               0.00     0.00   0.00   0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
> sdb               0.00   196.00   1.00 117.00     4.00  1252.00    21.29     0.02    0.19   0.19   2.30
> sdc               1.50    66.00 277.00  66.50  3100.00   832.00    22.89    50.84  242.30   2.91 100.10
>
> Device:         rrqm/s   wrqm/s    r/s    w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
> sda               0.00     0.00   0.00   0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
> sdb               0.00   264.50   0.00 176.50     0.00  1764.00    19.99     0.04    0.21   0.21   3.70
> sdc               3.50   108.50 291.50  76.00  3228.00   752.00    21.66    89.42  239.39   2.72 100.05
>
> Device:         rrqm/s   wrqm/s    r/s    w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
> sda               0.00     4.98   0.00   1.00     0.00    23.88    48.00     0.00    0.00   0.00   0.00
> sdb               0.00    23.88   0.00   9.45     0.00   133.33    28.21     0.00    0.21   0.21   0.20
> sdc               1.00   105.97 274.13  53.73  3297.51   612.94    23.85    67.99  184.58   3.04  99.55
>
> Device:         rrqm/s   wrqm/s    r/s    w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
> sda               0.00     0.00   0.00   0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
> sdb               0.00    79.00   0.00  46.00     0.00   500.00    21.74     0.01    0.25   0.25   1.15
> sdc               2.50   141.00 294.00  43.50  3482.00   528.00    23.76    51.33  170.46   2.96 100.05
>
>
> vmstat at the same time:
> > vmstat 2
> procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
>  r  b   swpd   free   buff   cache   si   so    bi    bo   in    cs us sy id wa st
>  0 27     80 126380  27304 3253016    0    0    98    55    0     1 17  5 33 45  0
>  0 26     80 124516  27300 3255456    0    0  3438  1724 2745  4011 11  2  8 78  0
>  1 25     80 124148  27276 3252548    0    0  3262  2806 3572  7007 33 11  3 53  0
>  1 28     80 128272  27244 3248516    0    0  2816  1006 2926  5624 12  3 12 73  0
>
>
> I will run pgbench in the next few days.
>
>
>
>> Aside from all the advice here about system tuning, as a system admin I'd
>> also ask is the box doing the job you need? And are you looking at the
>> Postgres log (with logging of slow queries) to see that queries perform in a
>> sensible time? I'd assume with the current performance figure there is an
>> issue somewhere, but I've been to places where it was as simple as adding
>> one index, or even modifying an index so it does what the application
>> developer intended instead of what they ask for ;)
>>
>>
>
> I already checked the Postgres log and resolved the index/slow-query issues.
> Actually, I have queries that are sometimes really fast and sometimes hit the
> timeout.
> But all the required indexes are there. For sure, there is still room to
> improve performance that way too, but I would also like to investigate the
> issue from other points of view (in order to understand how to monitor the
> server as well).
>
>
> Cheers and thanks a lot.
> ste
>
>
> --
> Sent via pgsql-performance mailing list (pgsql-performance(at)postgresql(dot)org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-performance
>