From: Tory M Blue <tmblue(at)gmail(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Performance question 83 GB Table 150 million rows, distinct select
Date: 2011-11-16 22:53:17
Message-ID: CAEaSS0b7D4eyDi4kt-Dt8u-yGq+akD=5PZwMk_TUHP-zujAnCQ@mail.gmail.com
Lists: pgsql-performance
Linux F12 64bit
Postgres 8.4.4
16 proc / 32GB
8 disk 15KRPM SAS/Raid 5 (I know!)
shared_buffers = 6000MB
#temp_buffers = 8MB
max_prepared_transactions = 0
work_mem = 250MB
maintenance_work_mem = 1000MB
We now have about 180 million records in that table. The database size is
about 580GB, and the userstats table, which is the biggest one and the
one we query the most, is 83GB.
Just a basic query takes 4 minutes. For example:

select count(distinct uid) from userstats where log_date > '11/7/2011';
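If it helps, this is how I can grab the actual plan for that query (just a sketch; it re-runs the full query, so it takes the whole ~4 minutes, and 8.4 only has the plain EXPLAIN ANALYZE form, no BUFFERS option):

-- capture the plan and actual timings for the slow query
EXPLAIN ANALYZE
select count(distinct uid)
from userstats
where log_date > '11/7/2011';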
Since we are looking for a distinct count, obviously we can't use an
index. But I'm wondering what should be expected here, and what is caused
by tuning or the lack thereof. Doing an iostat I see maybe 10-15%
utilization, however the CPU that this query is attached to is obviously
in the 99-100% busy arena. Or am I really I/O-bound for this single query
(sure, it's a lot of data, but?!)?
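To be concrete about what I mean by "an index" here, something like this on the filter column (a sketch only; the index name is invented for illustration, and as I understand it the distinct aggregation itself still wouldn't use it, it would only narrow the rows scanned by the log_date filter):

-- hypothetical btree on the WHERE column; count(distinct uid) still has
-- to sort/hash the uids after the filter is applied
CREATE INDEX CONCURRENTLY userstats_log_date_idx ON userstats (log_date);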
It takes roughly 5.5 hours to do a concurrent re-index and this DB is
vac'd nightly.
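(By "concurrent re-index" I mean the usual 8.4 approach of building a replacement index with CREATE INDEX CONCURRENTLY and swapping it in; roughly this sketch, with made-up index names:)

-- rebuild an index without blocking writes (names are examples only)
CREATE INDEX CONCURRENTLY userstats_uid_idx_new ON userstats (uid);
DROP INDEX userstats_uid_idx;
ALTER INDEX userstats_uid_idx_new RENAME TO userstats_uid_idx;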
Just not sure if this is what to expect; however, there are many other
DBs out there bigger than ours, so I'm curious: what can I do?
Thanks
Tory
avg-cpu: %user %nice %system %iowait %steal %idle
1.41 0.00 0.20 1.61 0.00 96.78
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 92.17 3343.06 1718.85 46273281004 23791660544
avg-cpu: %user %nice %system %iowait %steal %idle
1.47 0.00 0.61 5.85 0.00 92.07
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 417.50 90372.00 0.00 180744 0
avg-cpu: %user %nice %system %iowait %steal %idle
2.88 0.00 0.76 6.34 0.00 90.03
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 725.00 183560.00 148.00 367120 296
avg-cpu: %user %nice %system %iowait %steal %idle
2.18 0.00 0.60 3.59 0.00 93.63
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 711.00 179952.00 240.00 359904 480
[blue(at)adb01 ~]$ iostat -xd 2
Linux 2.6.32.26-175.fc12.x86_64 (adb01) 11/16/2011 _x86_64_ (16 CPU)
Device:   rrqm/s   wrqm/s      r/s     w/s     rsec/s    wsec/s  avgrq-sz  avgqu-sz   await  svctm  %util
sda         0.18   191.40    68.71   23.45    3343.22   1718.85     54.92      0.12    4.61   2.05  18.94

Device:   rrqm/s   wrqm/s      r/s     w/s     rsec/s    wsec/s  avgrq-sz  avgqu-sz   await  svctm  %util
sda         2.00     0.00   706.50    8.00  178832.00    128.00    250.47     77.76   31.21   1.40 100.00

Device:   rrqm/s   wrqm/s      r/s     w/s     rsec/s    wsec/s  avgrq-sz  avgqu-sz   await  svctm  %util
sda         4.98    17.41   584.58   35.32  148497.51    672.64    240.64     38.04  227.07   1.61  99.55

Device:   rrqm/s   wrqm/s      r/s     w/s     rsec/s    wsec/s  avgrq-sz  avgqu-sz   await  svctm  %util
sda         3.50     0.00   688.50    2.00  174556.00     32.00    252.84      2.81    4.66   1.44  99.30

Device:   rrqm/s   wrqm/s      r/s     w/s     rsec/s    wsec/s  avgrq-sz  avgqu-sz   await  svctm  %util
sda         1.00    10.00   717.50    1.50  182084.00     92.00    253.37      2.43    3.37   1.38  99.45
^C
[blue(at)]$ iostat 2
Linux 2.6.32.26-175.fc12.x86_64 (adb01) 11/16/2011 _x86_64_ (16 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
1.41 0.00 0.20 1.61 0.00 96.78
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 92.17 3343.33 1718.85 46277115652 23791678248
avg-cpu: %user %nice %system %iowait %steal %idle
7.79 0.00 0.51 8.51 0.00 83.20
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 235.50 45168.00 0.00 90336 0
avg-cpu: %user %nice %system %iowait %steal %idle
5.90 0.00 0.35 4.46 0.00 89.29
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 160.00 14688.00 132.00 29376 264
avg-cpu: %user %nice %system %iowait %steal %idle
8.01 0.00 0.51 12.80 0.00 78.67
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 163.50 11324.00 700.00 22648 1400