From: | Robert Haas <robertmhaas(at)gmail(dot)com> |
---|---|
To: | Claudio Freire <klaussfreire(at)gmail(dot)com> |
Cc: | Tomas Vondra <tv(at)fuzzy(dot)cz>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org> |
Subject: | Re: Performance |
Date: | 2011-04-27 20:27:48 |
Message-ID: | BANLkTikASFwAcMwn+G=PEo9hsv-j9VCCNg@mail.gmail.com |
Lists: | pgsql-performance |
On Tue, Apr 26, 2011 at 9:49 AM, Claudio Freire <klaussfreire(at)gmail(dot)com> wrote:
> On Tue, Apr 26, 2011 at 7:30 AM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>> On Apr 14, 2011, at 2:49 AM, Claudio Freire <klaussfreire(at)gmail(dot)com> wrote:
>>> This particular factor is not about an abstract and opaque "Workload"
>>> the server can't know about. It's about cache hit rate, and the server
>>> can indeed measure that.
>>
>> The server can and does measure hit rates for the PG buffer pool, but to my knowledge there is no clear-cut way for PG to know whether read() is satisfied from the OS cache or a drive cache or the platter.
>
> Isn't latency an indicator?
>
> If you plot latencies, you should see three markedly obvious clusters:
> OS cache (microseconds), Drive cache (slightly slower), platter
> (tail).
What if the user is using an SSD or ramdisk?
Admittedly, in many cases, we could probably get somewhat useful
numbers this way. But I think it would be pretty expensive.
gettimeofday() is one of the reasons why running EXPLAIN ANALYZE on a
query is significantly slower than just running it normally. I bet if
we put such calls around every read() and write(), it would cause a
BIG slowdown for workloads that don't fit in shared_buffers.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company