Re: limiting performance impact of wal archiving.

From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: Laurent Laborde <kerdezixe(at)gmail(dot)com>
Cc: Greg Smith <greg(at)2ndquadrant(dot)com>, Ivan Voras <ivoras(at)freebsd(dot)org>, pgsql-performance(at)postgresql(dot)org
Subject: Re: limiting performance impact of wal archiving.
Date: 2009-11-10 17:01:30
Message-ID: dcc563d10911100901i3af595dr71920fe99c1f9f70@mail.gmail.com
Lists: pgsql-performance

On Tue, Nov 10, 2009 at 9:52 AM, Laurent Laborde <kerdezixe(at)gmail(dot)com> wrote:
> On Tue, Nov 10, 2009 at 5:35 PM, Greg Smith <greg(at)2ndquadrant(dot)com> wrote:
>> disks (RAID1) are the two WAL setups that work well, and if I have a bunch
>> of drives I personally always prefer a dedicated drive mainly because it
>> makes it easy to monitor exactly how much WAL activity is going on by
>> watching that drive.

I do the same thing for the same reasons.

> On the "new" slave i have 6 disks in RAID-10 and 2 disks in RAID-1.
> I thought about doing the same thing with the master.

It would be a worthwhile change to make. As long as there's no heavy
log write load on the RAID-1, put pg_xlog there.
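Moving pg_xlog to a dedicated pair is usually done by relocating the directory and symlinking it back. A minimal sketch (the paths are assumptions, substitute your own; stop PostgreSQL before doing this):

```shell
# Sketch: relocate pg_xlog onto the dedicated RAID-1 mount and leave a
# symlink behind so PostgreSQL still finds it. Stop the server first.
# PGDATA and WALMNT are assumed paths -- substitute your real ones.
PGDATA="${PGDATA:-/var/lib/pgsql/data}"
WALMNT="${WALMNT:-/mnt/wal}"           # mount point of the RAID-1 pair
mv "$PGDATA/pg_xlog" "$WALMNT/pg_xlog"
ln -s "$WALMNT/pg_xlog" "$PGDATA/pg_xlog"
```

A side benefit, as noted above: iostat on that one device now shows exactly how much WAL traffic you have.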

>> Generally if checkpoints and archiving are painful, the first thing to do is
>> to increase checkpoint_segments to a very high amount (>100), increase
>> checkpoint_timeout too, and push shared_buffers up to be a large chunk of
>> memory.
>
> shared_buffers is 2GB.

On some busy systems with lots of small transactions, a large
shared_buffers can make things run slower rather than faster, due to
background writer overhead.
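Greg's suggestions above translate into a postgresql.conf fragment along these lines (8.3-era parameter names; the values are illustrative assumptions, not recommendations for any particular box):

```
# postgresql.conf -- illustrative values only
shared_buffers = 2GB                  # large, but watch bgwriter overhead
checkpoint_segments = 128             # >100 to spread out checkpoint I/O
checkpoint_timeout = 30min            # checkpoint less often
checkpoint_completion_target = 0.9    # smooth checkpoint writes over time
```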

> I'll reread the documentation about checkpoint_segments.
> thx.

Note that if you've got a slow I/O subsystem, a large number of
checkpoint segments can result in REALLY long restart times after a
crash, as well as really long waits for shutdown and/or the bgwriter
once you've filled them all up.
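There's a disk-space cost too. In 8.x each WAL segment is 16MB, and pg_xlog can grow to roughly (2 + checkpoint_completion_target) * checkpoint_segments + 1 files, so it's worth doing the arithmetic before cranking the setting up. A quick back-of-envelope sketch:

```python
# Back-of-envelope pg_xlog sizing for 8.x-era PostgreSQL: each WAL
# segment is 16 MB, and the file count can reach roughly
# (2 + checkpoint_completion_target) * checkpoint_segments + 1.
SEGMENT_MB = 16

def max_wal_mb(checkpoint_segments, completion_target=0.5):
    """Approximate worst-case pg_xlog size in megabytes."""
    files = (2 + completion_target) * checkpoint_segments + 1
    return files * SEGMENT_MB

print(max_wal_mb(100))  # ~4 GB of pg_xlog at checkpoint_segments = 100
```

So a setting over 100 means budgeting several gigabytes for pg_xlog, and potentially replaying that much WAL after a crash.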

>> You never want to use LVM under Linux if you care about performance.  It
>> adds a bunch of overhead that drops throughput no matter what, and it's
>> filled with limitations.  For example, I mentioned write barriers being one
>> way to interleave WAL writes with other types without having to write the
>> whole filesystem cache out.  Guess what:  they don't work at all if
>> you're using LVM.  Much like using virtual machines, LVM is an approach
>> only suitable for low to medium performance systems where your priority is
>> easier management rather than speed.
>
> *doh* !!
> Everybody told me "nooo! LVM is ok, no perceptible overhead, etc ..."
> Are you 100% sure about LVM? I'll happily trash it :)

Everyone who doesn't run databases thinks LVM is plenty fast. Under a
database it is not so quick. Do your own testing to be sure, but on
fast RAID arrays I've seen throughput drop to about half under it.
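For the "do your own testing" part, even a crude dd run catches a slowdown that large: time the same sequential write on the LVM volume and on a plain partition and compare the MB/s figures. A minimal sketch (assumes GNU dd; TESTDIR is an assumption, point it at the filesystem under test):

```shell
# Crude sequential-write probe: ~100 MB of 8 kB writes (PostgreSQL's
# page size), fsync'd at the end so the number reflects the disk, not
# the page cache. Run once per filesystem and compare dd's MB/s output.
TESTDIR="${TESTDIR:-/tmp}"             # assumption: dir on the volume under test
TESTFILE="$TESTDIR/pg_io_probe.$$"
dd if=/dev/zero of="$TESTFILE" bs=8k count=12800 conv=fsync
rm -f "$TESTFILE"
```

This is only a smoke test, not a substitute for a real benchmark against your actual workload.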

>> Given the current quality of Linux code, I hesitate to use anything but ext3
>> because I consider that just barely reliable enough even as the most popular
>> filesystem by far.  JFS and XFS have some benefits to them, but none so
>> compelling to make up for how much less testing they get.  That said, there
>> seem to be a fair number of people happily running high-performance
>> PostgreSQL instances on XFS.
>
> Thx for the info :)

Note that XFS gets a LOT of testing, especially under Linux. That
said, it's still probably only 1/10th as many dbs (or fewer) as those
running on ext3 on Linux. I've used it before and it's a little
faster than ext3 at some things, especially deleting large files (or,
in pg's case, lots of 1GB files), which can make ext3 crawl.
