From: david(at)lang(dot)hm
To: Carlos Moreno <moreno_pg(at)mochima(dot)com>
Cc: PostgreSQL Performance <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Feature Request --- was: PostgreSQL Performance Tuning
Date: 2007-05-04 01:52:47
Message-ID: Pine.LNX.4.64.0705031849250.6380@asgard.lang.hm
Lists: pgsql-general, pgsql-performance
On Thu, 3 May 2007, Carlos Moreno wrote:
>
>> > Part of my claim is that measuring real-time you could get an
>> > error like this or even a hundred times this!!  Most of the time
>> > you wouldn't, and definitely if the user is careful it would not
>> > happen --- but it *could* happen!!!  (and when I say could, I
>> > really mean: trust me, I have actually seen it happen)
>>
>> if you have errors of several orders of magnitude in the number of loops
>> it can run in a given time period then you don't have something that you
>> can measure to any accuracy (and it wouldn't matter anyway, if your loops
>> are that variable, your code execution would be as well)
>
> Not necessarily --- operating conditions may change drastically from
> one second to the next; that does not mean that your system is useless;
> simply that the measuring mechanism is way too vulnerable to the
> particular operating conditions at the exact moment it was executed.
>
> I'm not sure if that was intentional, but you bring up an interesting
> issue --- or in any case, your comment made me drastically re-think
> my whole argument: do we *want* to measure the exact speed, or
> rather the effective speed under normal operating conditions on the
> target machine?
>
> I know the latter is almost impossible --- we're talking about an estimate
> of a random process' parameter (and we need to do it in a short period
> of time) ... But the argument goes more or less like this: if you have a
> machine that runs at 1000 MIPS, but it's usually busy running things
> that on average consume 500 of those 1000 MIPS, would we want PG's
> configuration file to be obtained based on 1000 or based on 500 MIPS???
> After all, the CPU is, as far as PostgreSQL will be able to see, 500 MIPS
> fast, *not* 1000.
>
> I think I better stop, if we want to have any hope that the PG team will
> ever actually implement this feature (or similar) ... We're probably just
> scaring them!! :-)
Simpler is better (or: perfect is the enemy of good enough).
If you do your sampling over a few seconds (or a few tens of seconds), things
will average out quite a bit.
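To make that concrete, here is a rough sketch of the kind of sampling being
discussed (the work unit, window lengths, and names are illustrative
assumptions, not anything PostgreSQL actually ships):

```python
# Rough sketch: time a fixed unit of work repeatedly over a multi-second
# window so that momentary load spikes average out.  The work unit and the
# window lengths are arbitrary illustrative choices.
import time

def work_unit(n=100_000):
    # A trivial CPU-bound loop; what matters is that it is the same each time.
    total = 0
    for i in range(n):
        total += i
    return total

def effective_rate(window_seconds=10.0):
    """Return work units completed per second over the sampling window."""
    start = time.monotonic()
    units = 0
    while time.monotonic() - start < window_seconds:
        work_unit()
        units += 1
    return units / (time.monotonic() - start)

if __name__ == "__main__":
    # A 1-second sample is hostage to whatever else the box is doing at that
    # instant; a 10-second sample is closer to the "effective" speed under
    # normal load that Carlos is describing.
    print("1s sample :", round(effective_rate(1.0), 1), "units/sec")
    print("10s sample:", round(effective_rate(10.0), 1), "units/sec")
```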
The key is to go for a reasonable starting point. After that, the
full-analysis folks can start in with all their monitoring and tuning, but
the 80/20 rule really applies here: 80% of the gain comes from getting
'fairly close' to the right values, and that should only take 20% of the
full 'tuning project'.
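As a sketch of what a 'fairly close' starting point could look like (the
percentages below are commonly quoted rules of thumb and the function is a
hypothetical example, not a tool PostgreSQL actually provides):

```python
# Hypothetical sketch of the "reasonable starting point" step: take a couple
# of coarse inputs (total RAM, expected connection count) and emit ballpark
# settings.  The fractions are commonly quoted rules of thumb, nothing more.
def starting_point(ram_mb, max_connections=100):
    return {
        "shared_buffers": f"{ram_mb // 4}MB",            # roughly 25% of RAM
        "effective_cache_size": f"{ram_mb * 3 // 4}MB",  # roughly 75% of RAM
        "work_mem": f"{max(1, ram_mb // (max_connections * 4))}MB",
        "maintenance_work_mem": f"{min(1024, ram_mb // 16)}MB",
    }

if __name__ == "__main__":
    for name, value in starting_point(ram_mb=4096).items():
        print(f"{name} = {value}")
```

Everything beyond ballpark numbers like these is the other 20% of the gain
that the full tuning project chases.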
David Lang