From: | "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov> |
---|---|
To: | "Heikki Linnakangas" <heikki(dot)linnakangas(at)enterprisedb(dot)com> |
Cc: | "Dan Ports" <drkp(at)csail(dot)mit(dot)edu>, "john(dot)okite(at)gmail(dot)org" <john(dot)okite(at)gmail(dot)org>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>, <anssi(dot)kaariainen(at)thl(dot)fi> |
Subject: | Re: SSI patch version 8 |
Date: | 2011-01-13 15:02:12 |
Message-ID: | 4D2EBF950200002500039483@gw.wicourts.gov |
Lists: pgsql-hackers
Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com> wrote:
> where exactly is the extra overhead coming from?
Keep in mind that this is a sort of worst case scenario. The data
is fully cached in shared memory and we're doing a sequential pass
just counting the rows. In an earlier benchmark (which I should
re-do after all this refactoring), random access queries against a
fully cached data set showed only a 1.8% increase in run time. Throw some
disk access into the mix, and the overhead is likely to get lost in
the noise.
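For anyone who wants to see that worst case for themselves, the
comparison is roughly along these lines (the table name and row count
here are just for illustration, not what I actually ran):

  -- populate a small table and warm the cache so the scan is
  -- CPU-bound rather than I/O-bound
  create table t (id int);
  insert into t select generate_series(1, 1000000);
  select count(*) from t;

  -- baseline: plain snapshot isolation, no predicate locking
  begin isolation level repeatable read;
  select count(*) from t;
  commit;

  -- with the SSI patch: the same sequential scan also goes through
  -- the new predicate locking / conflict checking code
  begin isolation level serializable;
  select count(*) from t;
  commit;

With \timing turned on in psql the difference between the two
transactions is easy to see on a fully cached table.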
But, as I said, count(*) seems to be the first thing many people try
as a benchmark, and this is a symptom of a more general issue, so
I'd like to find a good solution.
-Kevin