From: Mark Kirkwood <mark(dot)kirkwood(at)catalyst(dot)net(dot)nz>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Slow count(*) again...
Date: 2010-10-13 08:50:23
Message-ID: 4CB572CF.4080709@catalyst.net.nz
Lists: pgsql-hackers pgsql-performance
On 13/10/10 21:38, Neil Whelchel wrote:
>
> So with our conclusion pile so far we can deduce that if we were to keep all
> of our data in two column tables (one to link them together, and the other to
> store one column of data), we stand a much better chance of making the entire
> table to be counted fit in RAM, so we simply apply the WHERE clause to a
> specific table as opposed to a column within a wider table... This seems to
> defeat the entire goal of the relational database...
>
>
That is a bit excessive, I think - a more reasonable conclusion to draw
is that tables bigger than RAM will be scanned at maximum I/O speed
rather than at DIMM speed...
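For a quick sanity check of whether a given table can fit in RAM, something
like the following works (the table name is just a placeholder):

    -- size of the table's main heap on disk
    SELECT pg_size_pretty(pg_relation_size('my_big_table'));
    -- compare against the buffer cache setting (and total machine RAM)
    SHOW shared_buffers;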
There are things you can do to radically improve I/O throughput - e.g. a
pair of AMCC or Areca 12-slot RAID cards set up as RAID 10 and tuned
properly should give you a maximum sequential throughput of something like
12*100 MB/s = 1.2 GB/s. So your example table (estimated at 2 GB) should
be able to be counted by Postgres in about 3-4 seconds (the raw I/O alone
is 2 GB / 1.2 GB/s, under 2 s; per-tuple processing accounts for the rest)...
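You can measure what you actually get from psql (table name again a
placeholder):

    \timing on
    -- expect roughly table_size / sequential_throughput, plus per-tuple CPU overhead
    SELECT count(*) FROM my_big_table;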
This assumes a more capable machine than the one you are testing on, I suspect.
Cheers
Mark