From: Aidan Van Dyk <aidan(at)highrise(dot)ca>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Fujii Masao <masao(dot)fujii(at)gmail(dot)com>, Erik Rijkers <er(at)xs4all(dot)nl>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: testing HS/SR - 1 vs 2 performance
Date: 2010-04-12 13:30:01
Message-ID: 20100412133001.GH5554@oak.highrise.ca
Lists: pgsql-hackers
* Robert Haas <robertmhaas(at)gmail(dot)com> [100412 07:10]:
> I think we need to investigate this more. It's not going to look good
> for the project if people find that a hot standby server runs two
> orders of magnitude slower than the primary.
Yes, it's not "good", but it's a known problem. We've had people
complaining that WAL replay can't keep up with the WAL stream coming
from a heavily loaded server.
The master producing the WAL stream has $XXX separate read/modify
processes working over the data dir, and is bottlenecked by the
serialized WAL stream. All the seek+read delays are parallelized and
overlapping.
But the slave (traditionally a PITR slave, now also HS/SR) has all
that read-modify-write happening in single-threaded fashion, meaning
that WAL record $X+1 waits until the buffer that record $X needs to
modify is read in. All the seek+read delays are serialized.
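
To make the shape of that concrete, here's a purely illustrative
Python sketch of a single-process replay loop (this is not PostgreSQL
code; wal_records, read_page_from_disk and redo are made-up stand-ins):

import collections

Record = collections.namedtuple("Record", "lsn page_id redo")

def replay(wal_records, shared_buffers, read_page_from_disk):
    """Apply records strictly in order; every missing page stalls the whole loop."""
    for rec in wal_records:             # the WAL itself is read sequentially: cheap
        page = shared_buffers.get(rec.page_id)
        if page is None:
            # Random seek+read happens here; record $X+1 isn't even looked at
            # until this read for record $X has completed.
            page = read_page_from_disk(rec.page_id)
            shared_buffers[rec.page_id] = page
        rec.redo(page)                  # modify the buffer; it's written back later

There's no point in that loop where two of those random reads can be in
flight at once.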
You can optimize that by keeping more of those buffers cached (in
shared buffers, or the OS cache), but the WAL producer, which by its
very nature is a multi-task random read/write I/O load, is always going
to go quicker than a single-stream random-I/O WAL consumer...
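
Just to put rough numbers on it, here's a toy back-of-the-envelope
simulation (the latency and concurrency figures are invented, nothing
measured):

import random

RANDOM_READ_MS = 8      # assumed average seek+read latency
RECORDS = 10000         # WAL records that each need one page read
MASTER_BACKENDS = 50    # concurrent read/modify backends on the master

waits = [random.expovariate(1.0 / RANDOM_READ_MS) for _ in range(RECORDS)]

# Master: the waits are spread across many backends running in parallel,
# so wall-clock time is roughly the per-backend share of the total wait.
master_s = sum(waits) / MASTER_BACKENDS / 1000.0

# Slave: one recovery process pays every wait back-to-back.
slave_s = sum(waits) / 1000.0

print("master ~%5.1f s of wall-clock I/O wait" % master_s)
print("slave  ~%5.1f s of wall-clock I/O wait" % slave_s)

With those invented numbers the single replay process spends roughly
MASTER_BACKENDS times longer waiting on reads than the master does;
the ratio is simply however many backends' waits the master overlaps.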
a.
--
Aidan Van Dyk                                            Create like a god,
aidan(at)highrise(dot)ca                                 command like a king,
http://www.highrise.ca/                                  work like a slave.