From: Greg Smith <gsmith(at)gregsmith(dot)com>
To: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Load Distributed Checkpoints test results
Date: 2007-06-17 05:36:33
Message-ID: Pine.GSO.4.64.0706160129370.10398@westnet.com
Lists: pgsql-hackers
On Fri, 15 Jun 2007, Gregory Stark wrote:
> But what you're concerned about is not OLTP performance at all.
It's an OLTP system most of the time that periodically gets unexpectedly
high volume. The TPC-E OLTP test suite actually has a MarketFeed
component in it with properties similar to what I was fighting
with. In a real-world market feed, you spec the system to survive a very
high-volume day of trades. But every now and then some event causes
volume to spike way outside anything you could ever plan for, and much
data ends up getting lost as a result of systems not being able to keep
up. A look at the 1987 "Black Monday" crash is informative
here: http://en.wikipedia.org/wiki/Black_Monday_(1987)
> But the point is you're concerned with total throughput and not response
> time. You don't have a fixed rate imposed by outside circumstances with
> which you have to keep up all the time. You just want to have the
> highest throughput overall.
Actually, I think I care about response time more than you do. In a
typical data logging situation, there is some normal rate at which you
expect transactions to arrive. There's usually something memory-based
upstream that can buffer a small amount of delay, so an occasional short
checkpoint blip can be tolerated. But if there's ever a really extended
one, you actually start losing data when the buffers overflow.
On the last project I was working on, any checkpoint that caused a
transaction to slip for more than 5 seconds would cause data loss. One
of the defenses against that happening is having a wicked fast
transaction rate to clear the buffer out when things are going well, but
by no means is that rate the important thing--never having response time
stall for so long that transactions get lost is.
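To make that failure mode concrete, here's a rough sketch (hypothetical
names and rates, plain stdlib Python--nothing from the actual project) of
the kind of bounded upstream buffer involved: it absorbs short checkpoint
blips, but once an extended stall fills it up, records drop on the floor:

```python
import queue

# Hypothetical upstream feed buffer: absorbs short database stalls,
# but drops incoming records once the bounded queue fills up.
BUFFER_CAPACITY = 10000   # records of headroom while the database stalls
feed_buffer = queue.Queue(maxsize=BUFFER_CAPACITY)
dropped = 0

def on_incoming_record(record):
    """Called at the (fast) arrival rate of the feed."""
    global dropped
    try:
        feed_buffer.put_nowait(record)
    except queue.Full:
        # The checkpoint stall outlasted the buffer: data is lost.
        dropped += 1

def writer_loop(insert_fn):
    """Drains the buffer into the database as fast as it will accept."""
    while True:
        record = feed_buffer.get()
        insert_fn(record)   # blocks for seconds during a bad checkpoint
```

At, say, 2000 incoming records/second, 10,000 records of headroom buys
you only 5 seconds of stall before on_incoming_record starts dropping
data--which is exactly the response time budget I was working against.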
> The good news is that this should be pretty easy to test though. The
> main competitor for DBT2 is BenchmarkSQL whose main deficiency is
> precisely the lack of support for the think times.
Maybe you can get something useful out of that one. I found that the
JDBC layer in the middle lowered overall throughput and distanced me
from what was happening enough that it blurred what was going on.
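For what it's worth, the kind of direct measurement I'd rather work from
looks like this--a rough sketch (psycopg2 against a made-up feed_log
table and local connection string) that times each commit straight
through libpq, so there's no middleware between you and the latency
spikes:

```python
import time
import psycopg2

# Hypothetical connection string and table; adjust to your setup.
conn = psycopg2.connect("dbname=pgbench host=localhost")
conn.autocommit = False
cur = conn.cursor()

worst = 0.0
for i in range(100000):
    start = time.monotonic()
    cur.execute("INSERT INTO feed_log (seq, payload) VALUES (%s, %s)",
                (i, "x" * 100))
    conn.commit()                      # where a checkpoint stall shows up
    elapsed = time.monotonic() - start
    worst = max(worst, elapsed)
    if elapsed > 5.0:                  # the data-loss threshold above
        print("transaction %d slipped %.1fs" % (i, elapsed))
print("worst commit latency: %.3fs" % worst)
```

Any transaction slipping past the 5-second mark there is one that would
have been lost data on the project I described.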
--
* Greg Smith gsmith(at)gregsmith(dot)com http://www.gregsmith.com Baltimore, MD