From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Peter Geoghegan <pg(at)heroku(dot)com>, KONDO Mitsumasa <kondo(dot)mitsumasa(at)lab(dot)ntt(dot)co(dot)jp>, Rajeev rastogi <rajeev(dot)rastogi(at)huawei(dot)com>, Mitsumasa KONDO <kondo(dot)mitsumasa(at)gmail(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Add min and max execute statement time in pg_stat_statement
Date: 2014-01-30 17:42:06
Message-ID: 2412.1391103726@sss.pgh.pa.us
Lists: pgsql-hackers
Robert Haas <robertmhaas(at)gmail(dot)com> writes:
> One could test it with each pgbench thread starting at a random point
> in the same sequence and wrapping at the end.
Well, the real point is that 10000 distinct statements all occurring with
exactly the same frequency isn't a realistic scenario: any hashtable size
less than 10000 necessarily sucks, any size >= 10000 is perfect.
I'd think that what you want to test is a long-tailed frequency
distribution (probably a 1/N type of law) where a small number of
statements account for most of the hits and there are progressively fewer
uses of less common statements. What would then be interesting is how the
performance changes as the hashtable size is varied to cover more or less
of that distribution.
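[Editor's note: as an illustrative sketch of the point above (not code from this thread), here is how one might estimate what fraction of total statement executions a pg_stat_statements hashtable of a given size could cover, assuming a 1/N (Zipf-like) frequency law over 10000 distinct statements and an ideal cache that retains the most frequent ones. The names `coverage` and `N` are hypothetical.]

```python
# Sketch: under a 1/k frequency law, statement k's share of executions is
# proportional to 1/k.  An ideal hashtable of size S that keeps the S most
# frequent statements then covers sum(1/k, k=1..S) / sum(1/k, k=1..N) of
# all executions.

N = 10000  # distinct statements in the workload

# weight of statement k (1-based) is 1/k
weights = [1.0 / k for k in range(1, N + 1)]
total = sum(weights)

def coverage(cache_size):
    """Fraction of executions hitting the cache_size most frequent statements."""
    return sum(weights[:cache_size]) / total

for s in (100, 1000, 5000, 10000):
    print(f"hashtable size {s:5d} covers {coverage(s):.1%} of executions")
```

Unlike the uniform-frequency case, coverage here grows smoothly (roughly logarithmically) with hashtable size rather than jumping from "sucks" to "perfect" at size 10000, which is what makes the long-tailed distribution an interesting benchmark.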
regards, tom lane