From: | "ktm(at)rice(dot)edu" <ktm(at)rice(dot)edu> |
---|---|
To: | Albe Laurenz <laurenz(dot)albe(at)wien(dot)gv(dot)at> |
Cc: | Jeff *EXTERN* <jeff(at)jefftrout(dot)com>, Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, pgsql-performance(at)postgresql(dot)org |
Subject: | Re: Replaying 48 WAL files takes 80 minutes |
Date: | 2012-10-30 13:05:33 |
Message-ID: | 20121030130533.GJ2872@aart.rice.edu |
Lists: pgsql-performance
On Tue, Oct 30, 2012 at 09:50:44AM +0100, Albe Laurenz wrote:
> >> On Mon, Oct 29, 2012 at 6:05 AM, Albe Laurenz
> >> <laurenz(dot)albe(at)wien(dot)gv(dot)at> wrote:
> >>> I am configuring streaming replication with hot standby
> >>> with PostgreSQL 9.1.3 on RHEL 6 (kernel 2.6.32-220.el6.x86_64).
> >>> PostgreSQL was compiled from source.
> >>>
> >>> It works fine, except that starting the standby took for ever:
> >>> it took the system more than 80 minutes to replay 48 WAL files
> >>> and connect to the primary.
> >>>
> >>> Can anybody think of an explanation why it takes that long?
>
> Jeff Janes wrote:
> >> Could the slow log files be replaying into randomly scattered pages
> >> which are not yet in RAM?
> >>
> >> Do you have sar or vmstat reports?
>
> The sar reports from the time in question tell me that I read
> about 350 MB/s and wrote less than 0.2 MB/s. The disks were
> fairly busy (around 90%).
>
> Jeff Trout wrote:
> > If you do not have good random I/O performance, log replay is nearly
> > unbearable.
> >
> > also, what io scheduler are you using? if it is cfq change that to
> > deadline or noop.
> > that can make a huge difference.
>
> We use the noop scheduler.
> As I said, an identical system performed well in load tests.
>
> The sar reports give credit to Jeff Janes' theory.
> Why does WAL replay read much more than it writes?
> I thought that pretty much every block read during WAL
> replay would also get dirtied and hence written out.
>
> I wonder why the performance is good in the first few seconds.
> Why should exactly the pages that I need in the beginning
> happen to be in cache?
>
> And finally: are the numbers I observe (replay 48 files in 80
> minutes) ok or is this terribly slow as it seems to me?
>
> Yours,
> Laurenz Albe
>
Hi,
The load tests probably had the "important" data already cached. Replaying
a WAL file means pulling the affected data pages back into memory in a
random I/O pattern. Perhaps priming the file system cache with some
sequential reads would let the later random I/O hit memory instead of
disk; a rough sketch of that idea is below. I may be misremembering, but
wasn't there an associated project/program that would parse the WAL files
and generate cache-priming reads?
Regards,
Ken