From: Heikki Linnakangas <heikki(at)enterprisedb(dot)com>
To: Vladimir Stankovic <V(dot)Stankovic(at)city(dot)ac(dot)uk>
Cc: PostgreSQL Performance <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Variable (degrading) performance
Date: 2007-06-12 18:20:51
Message-ID: 466EE403.5080200@enterprisedb.com
Lists: pgsql-performance
Vladimir Stankovic wrote:
> What I am hoping to see is NOT the same value for all executions of
> the same type of transaction (after some transient period). Instead, I'd
> like to see that if I take an appropriately-sized set of transactions I
> will see at least steady growth in average transaction times, if not
> exactly the same average. Each chunk might include a sudden
> performance drop due to the necessary vacuum and checkpoints. The
> performance might also be influenced by changes in the data set.
> I am unhappy that the durations of experiments can differ by as much
> as 30% (bearing in mind that the runs are not exactly identical, due
> to non-determinism on the client side). I would like to eliminate this
> variability. Are my expectations reasonable? What could be the cause(s)
> of this variability?
You should see that steady behavior if you define your "chunk" to be
long enough. Long enough is probably hours, not minutes or seconds. As I
said earlier, checkpoints and vacuum are a major source of variability.
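For example, here is a minimal sketch of that kind of chunked averaging,
assuming your client writes one CSV line per transaction with an epoch
timestamp and a latency in milliseconds (the file name and column layout
are assumptions for illustration, not anything from your setup):

# Hypothetical sketch: average per-transaction latency over long chunks
# (one hour here) so that checkpoint and vacuum spikes average out
# instead of dominating individual data points.
# Assumed input: CSV rows of "end_timestamp_epoch_seconds,latency_ms".
import csv
from collections import defaultdict

CHUNK_SECONDS = 3600  # one-hour chunks: long enough to span several
                      # checkpoint cycles and any vacuum runs

def chunk_averages(path):
    sums = defaultdict(float)
    counts = defaultdict(int)
    with open(path, newline="") as f:
        for ts, latency_ms in csv.reader(f):
            chunk = int(float(ts)) // CHUNK_SECONDS
            sums[chunk] += float(latency_ms)
            counts[chunk] += 1
    return {c: sums[c] / counts[c] for c in sorted(sums)}

if __name__ == "__main__":
    for chunk, avg in chunk_averages("tx_latencies.csv").items():
        print(f"hour starting at {chunk * CHUNK_SECONDS}: {avg:.1f} ms avg")

With one-hour chunks, each average covers many checkpoints and vacuums,
so you should see the steady chunk-to-chunk behavior you're after; with
minute-sized chunks, whichever ones happen to contain a checkpoint will
look much slower than the rest.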
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com