From: Michael Nolan <htfoot(at)gmail(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: New/Revised TODO? Gathering actual read performance data for use by planner
Date: 2011-05-24 20:34:23
Message-ID: BANLkTi=tNr6EBAObv_t-KLTwREZWKAhTYw@mail.gmail.com
Lists: pgsql-hackers
In the TODO list is this item:
* Modify the planner to better estimate caching effects
Tom mentioned this in his presentation at PGCON, and I also chatted with Tom
about it briefly afterwards.
Based on last year's discussion of this TODO item, it seems discussion has
focused on estimating how many reads are being satisfied from PG's shared
buffers. However, I think that's only part of the problem.
Specifically, read performance is going to be affected by:
1. Reads fulfilled from shared buffers.
2. Reads fulfilled from system cache.
3. Reads fulfilled from disk controller cache.
4. Reads from physical media.
#4 is further complicated by the type of physical media holding that specific
block. For example, reads fulfilled from an SSD are going to be much faster
than ones that access hard drives (or even slower types of media).
System load is going to impact all of these as well.
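The per-read latency itself already encodes which of these tiers served the
block, which is what makes measuring without attribution plausible. As a
rough, hypothetical sketch (Python rather than PostgreSQL's C, and on a
scratch file rather than a real relation), one could time each block read
with a monotonic clock:

```python
import os
import tempfile
import time

BLCKSZ = 8192  # PostgreSQL's default block size

def timed_block_read(fd, offset):
    """Time one block read. The observed latency implicitly reflects
    whichever tier (OS cache, controller cache, SSD, spinning disk)
    actually served it -- no attribution needed.
    Hypothetical illustration, not PostgreSQL code."""
    t0 = time.perf_counter_ns()
    data = os.pread(fd, BLCKSZ, offset)  # POSIX-only in Python
    elapsed_ns = time.perf_counter_ns() - t0
    return data, elapsed_ns

# Demo against a throwaway temp file
fd, path = tempfile.mkstemp()
os.write(fd, b"x" * BLCKSZ)
data, elapsed_ns = timed_block_read(fd, 0)
print(f"read {len(data)} bytes in {elapsed_ns} ns")
os.close(fd)
os.unlink(path)
```

This particular read will almost certainly come from the OS cache; the point
is only that the same two timestamps would bracket a physical-media read just
as well.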
Therefore, I suggest that an alternative to the above TODO may be to gather
performance data without knowing
(or more importantly without needing to know) which of the above sources
fulfilled the read.
This data would probably need to be kept separately for each table or index,
as some tables or indexes
may be mostly or fully in cache or on faster physical media than others,
although in the absence of other
data about a specific table or index, data about other relations in the same
tablespace might be of some use.
Tom mentioned that the cost of doing multiple system time-of-day calls for
each block read might be prohibitive. It may also be that the timing data is
too coarse on some systems to be truly useful (e.g., if the clock only
resolves to the epoch time in seconds).
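Both concerns (per-call overhead and clock granularity) can be probed
directly on a given system. A small sketch, assuming Python's
`time.perf_counter_ns` as a stand-in for whatever monotonic clock the
platform exposes:

```python
import time

def clock_overhead_ns(n=100_000):
    """Rough average cost of one monotonic-clock read,
    amortized over n back-to-back calls."""
    t0 = time.perf_counter_ns()
    for _ in range(n):
        time.perf_counter_ns()
    return (time.perf_counter_ns() - t0) / n

def clock_resolution_ns(samples=1000):
    """Finest observable nonzero tick between successive readings;
    returns None if the clock never advanced (a very coarse clock)."""
    best = None
    for _ in range(samples):
        a = time.perf_counter_ns()
        b = time.perf_counter_ns()
        d = b - a
        if d > 0 and (best is None or d < best):
            best = d
    return best

overhead = clock_overhead_ns()
resolution = clock_resolution_ns()
print(f"~{overhead:.0f} ns per call, finest tick ~{resolution} ns")
```

If the measured overhead is small relative to even a cached block read, the
"two timestamps per read" cost may be acceptable; if the finest tick is
coarser than a cache hit, individual cached reads would round to zero and
only aggregate statistics would be meaningful.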
If this data were available, successive executions of the same query could
produce significantly different plans (and thus actual performance), based on
what has happened recently, so these statistics would have to be relatively
short-term and updated frequently, but without becoming computational
bottlenecks.
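One cheap way to keep such statistics short-term without creating a
bottleneck is an exponentially weighted moving average: a single float per
table or index, updated in constant time per read, with recent observations
dominating. A hypothetical sketch (the class name, parameters, and prior are
my own invention, not a PostgreSQL API):

```python
class RelationReadStats:
    """Exponentially weighted moving average of observed per-block
    read times, one instance per table/index. Recent reads dominate,
    so the estimate tracks short-term caching behavior; storing a
    single float keeps both memory and update cost trivial.
    Hypothetical sketch, not PostgreSQL code."""

    def __init__(self, alpha=0.1, initial_ns=1_000_000.0):
        self.alpha = alpha             # weight given to each new observation
        self.avg_read_ns = initial_ns  # assumed prior before any data arrives

    def record(self, elapsed_ns):
        # Standard EWMA update: move the average toward the observation
        self.avg_read_ns += self.alpha * (elapsed_ns - self.avg_read_ns)

    def estimate(self):
        return self.avg_read_ns

# With alpha=0.5, a single slow read pulls the estimate up quickly
stats = RelationReadStats(alpha=0.5, initial_ns=1000.0)
for obs in (1000.0, 1000.0, 5000.0):
    stats.record(obs)
print(stats.estimate())  # 1000 + 0.5 * (5000 - 1000) = 3000.0
```

The choice of alpha is exactly the short-term/stability trade-off described
above: a larger alpha reacts faster to a relation falling out of cache, at
the cost of noisier estimates.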
The problem is one I'm interested in working on.
--
Mike Nolan
Next Message: Robert Haas, 2011-05-24 20:34:29, "tackling full page writes"
Previous Message: Bruce Momjian, 2011-05-24 20:01:35, "Re: Adding an example for replication configuration to pg_hba.conf"