From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: <jnasby(at)pervasive(dot)com>
Cc: <josh(at)agliodbs(dot)com>, <pgsql-hackers(at)postgresql(dot)org>, <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Subject: Re: A costing analysis tool
Date: 2005-10-18 22:05:24
Message-ID: 43552B540200002500000103@gwmta.wicourts.gov
Lists: pgsql-hackers
Thanks to all who have been offering suggestions. I have been
reading them and will try to incorporate as much as possible.
I have already reworked that little brain-dead python script into
something which uses a regular expression to pick off all of the
data from each cost/timing line (including the first one), and
tracks the hierarchy. I'll put all of these into the analysis
database.
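For anyone who wants to experiment before the script firms up, the
approach boils down to something like this (a simplified sketch, not
the attached parse_plan.py itself; the group names and the indent
heuristic here are illustrative only). EXPLAIN ANALYZE puts the
estimated and actual figures on the same line, so one pattern captures
everything needed for the cost comparison, and the indentation of the
"->" marker tells us where a node sits in the plan tree:

import re

# Pull the estimated and actual figures off each cost/timing line.
COST_RE = re.compile(
    r'\(cost=(?P<est_startup>[\d.]+)\.\.(?P<est_total>[\d.]+) '
    r'rows=(?P<est_rows>\d+) width=(?P<width>\d+)\) '
    r'\(actual time=(?P<act_startup>[\d.]+)\.\.(?P<act_total>[\d.]+) '
    r'rows=(?P<act_rows>\d+) loops=(?P<loops>\d+)\)')

def parse_plan(lines):
    """Yield (indent, figures) for every plan node, the top one
    included.  Deeper indent means a child node, so the caller can
    rebuild the hierarchy by comparing indents."""
    for line in lines:
        m = COST_RE.search(line)
        if m:
            indent = max(line.find('->'), 0)  # top node has no '->'
            yield indent, {k: float(v) for k, v in m.groupdict().items()}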
Due to client operational problems I've been called in on, I
haven't done much more than that so far. I'll try to firm up a
proposed schema for the data soon, and some idea of what a
test case definition will look like. Then I'll probably have to
set it aside for two or three weeks.
I'll attach the current plan scanner for review, comment, and
improvement. Also, someone may want to look at the results
from a few queries to get ideas going on how they want to use
the data.
Regarding the idea of a site where results could be posted
and loaded into a database which would be available for
public access -- I agree that would be great; however, my
client is not willing to take that on. If anyone wants to
volunteer, that would be fantastic.
-Kevin
>>> "Jim C. Nasby" <jnasby(at)pervasive(dot)com> >>>
On Fri, Oct 14, 2005 at 03:34:43PM -0500, Kevin Grittner wrote:
> of the two times as a reliability factor. Unfortunately, that
> means doubling the number of cache flushes, which is likely
> to be the most time-consuming part of running the tests. On
> the bright side, we would capture the top level runtimes you
> want.
Actually, if you shut down the database and run this bit of code with a
high enough number you should have a nicely cleaned cache.
#include <stdio.h>
#include <stdlib.h>
/* Allocate and zero argv[1] MiB, crowding cached pages out of RAM. */
int main(int argc, char *argv[]) {
    if (argc < 2 || !calloc(atoi(argv[1]), 1024 * 1024))
        printf("Error allocating memory.\n");
    return 0;
}
Running that on a dual Opteron (842s, I think) gives:
decibel(at)fritz(dot)1[16:35]~:10>time ./a.out 3300
3.142u 8.940s 0:40.62 29.7% 5+4302498k 0+0io 2pf+0w
That was on http://stats.distributed.net and resulted in about 100MB
being paged to disk. With 3000 it only took 20 seconds, but might not
have cleared 100% of memory.
Attachment: parse_plan.py (application/octet-stream, 2.1 KB)