From: Mark Kirkwood <mark(dot)kirkwood(at)catalyst(dot)net(dot)nz>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Slow count(*) again...
Date: 2010-10-13 21:48:21
Message-ID: 4CB62925.2070401@catalyst.net.nz
Lists: pgsql-hackers pgsql-performance
On 13/10/10 21:44, Mladen Gogala wrote:
> On 10/13/2010 3:19 AM, Mark Kirkwood wrote:
>> I think that major effect you are seeing here is that the UPDATE has
>> made the table twice as big on disk (even after VACUUM etc), and it has
>> gone from fitting in ram to not fitting in ram - so cannot be
>> effectively cached anymore.
>>
> In the real world, tables are larger than the available memory. I have
> tables of several hundred gigabytes in size. Tables shouldn't be
> "effectively cached", the next step would be to measure "buffer cache
> hit ratio", tables should be effectively used.
>
Sorry Mladen,

I didn't mean to suggest that all tables should fit into RAM... I was
pointing out (one reason) why Neil would expect to see a different
sequential scan speed after the UPDATE.

I agree that in many interesting cases, tables are bigger than RAM [1].
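(As an aside, the growth comes from MVCC: a full-table UPDATE writes a new
version of every row, and a plain VACUUM only marks the old versions as
reusable rather than shrinking the file. A rough sketch of how to observe
this, using a hypothetical table `t`:

```sql
-- Hypothetical table; sizes are approximate.
CREATE TABLE t AS SELECT generate_series(1, 1000000) AS id;
SELECT pg_relation_size('t');   -- baseline on-disk size

UPDATE t SET id = id + 1;       -- rewrites every row; old versions remain
VACUUM t;                       -- dead tuples become reusable, file stays large
SELECT pg_relation_size('t');   -- roughly twice the baseline

VACUUM FULL t;                  -- rewrites the table, returning space to the OS
SELECT pg_relation_size('t');   -- back near the baseline
```

So the table can stay at twice its logical size indefinitely unless it is
rewritten, which is why it may no longer fit in cache.)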
Cheers
Mark
[1] Having said that, these days 64GB of RAM is not unusual for a
server... and where I work we have many real customer databases smaller
than this.