From: "ktm(at)rice(dot)edu" <ktm(at)rice(dot)edu>
To: Albe Laurenz <laurenz(dot)albe(at)wien(dot)gv(dot)at>
Cc: Jeff *EXTERN* <jeff(at)jefftrout(dot)com>, Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Replaying 48 WAL files takes 80 minutes
Date: 2012-10-30 13:41:24
Message-ID: 20121030134124.GL2872@aart.rice.edu
Lists: pgsql-performance
On Tue, Oct 30, 2012 at 02:16:57PM +0100, Albe Laurenz wrote:
> ktm(at)rice(dot)edu wrote:
> >>> If you do not have good random io performance log replay is nearly
> >>> unbearable.
> >>>
> >>> also, what io scheduler are you using? if it is cfq change that to
> >>> deadline or noop.
> >>> that can make a huge difference.
> >>
> >> We use the noop scheduler.
> >> As I said, an identical system performed well in load tests.
>
> > The load tests probably had the "important" data already cached.
> > Processing a WAL file would involve bringing all the data back into
> > memory using a random I/O pattern.
>
> The database is way too big (1 TB) to fit into cache.
>
> What are "all the data" that have to be brought back?
> Surely only the database blocks that are modified by the WAL,
> right?
>
> Yours,
> Laurenz Albe
>
Right, it would only read the blocks that are modified.
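As an aside, for anyone following the scheduler advice quoted earlier in the thread: on Linux the active scheduler can be inspected and changed through sysfs. A minimal sketch (device names such as sda are placeholders; the sysfs layout assumes a reasonably recent Linux kernel, and writing the setting requires root):

```shell
# List the available I/O schedulers for each block device;
# the one in [brackets] is currently active.
for f in /sys/block/*/queue/scheduler; do
    [ -e "$f" ] || continue
    printf '%s: %s\n' "$f" "$(cat "$f")"
done

# Switch a device (here the hypothetical sda) to deadline at runtime:
#   echo deadline > /sys/block/sda/queue/scheduler
# To make it the default at boot on older kernels, add the boot
# parameter elevator=deadline to the kernel command line.
```

Note that the runtime change is not persistent across reboots, so a boot parameter or init script is needed to keep it.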
Regards,
Ken