From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Andreas Pflug <pgadmin(at)pse-consulting(dot)de>
Cc: Bob Ippolito <bob(at)redivi(dot)com>, Mark Cotner <mcotner(at)yahoo(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: sustained update load of 1-2k/sec
Date: 2005-08-19 14:09:05
Message-ID: 18000.1124460545@sss.pgh.pa.us
Lists: pgsql-performance
Andreas Pflug <pgadmin(at)pse-consulting(dot)de> writes:
> Tom Lane wrote:
>> As far as the question "can PG do 1-2k xact/sec", the answer is "yes
>> if you throw enough hardware at it". Spending enough money on the
>> disk subsystem is the key ...
>>
> The 1-2k xact/sec for MySQL seems suspicious, sounds very much like
> write-back cached, not write-through, esp. considering that heavy
> concurrent write access isn't said to be MySQL's strength...
> I wonder if preserving the database after a fatal crash is really
> necessary, since the data stored sounds quite volatile; in this case,
> fsync=false might be sufficient.
Yeah, that's something to think about. If you do need full transaction
safety, then you *must* have a decent battery-backed-write-cache setup,
else your transaction commit rate will be limited by disk rotation
speed --- for instance, a single connection can commit at most 250 xacts
per second if the WAL log is on a 15000 RPM drive. (You can improve this
to the extent that you can spread activity across multiple connections,
but I'm not sure you can expect to reliably have 8 or more connections
ready to commit each time the disk goes 'round.)
regards, tom lane
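
[Editor's note: for anyone checking the arithmetic above, here is a minimal
back-of-the-envelope sketch in Python; the function name and parameters are
illustrative, not anything from the thread. With fsync on and no
battery-backed write cache, each connection can commit at most once per
platter rotation, since the commit must wait for the WAL block to reach disk.]

    def max_commit_rate(rpm, committers_per_rotation=1):
        # One fsync'd WAL write can complete per disk rotation, so a single
        # connection is capped at rpm/60 commits per second.
        rotations_per_sec = rpm / 60.0
        return rotations_per_sec * committers_per_rotation

    print(max_commit_rate(15000))      # 250.0  -- one connection, 15000 RPM WAL drive
    print(max_commit_rate(15000, 8))   # 2000.0 -- only if 8 backends are ready to
                                       #           commit on every rotation

[The second figure illustrates Tom's caveat: reaching 1-2k commits/sec this
way requires 8 or more connections reliably queued up each time the disk
comes around, which may not hold in practice.]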