From: Kenneth Marshall <ktm(at)rice(dot)edu>
To: Laurent Laborde <kerdezixe(at)gmail(dot)com>
Cc: PostgreSQL Performance <pgsql-performance(at)postgresql(dot)org>
Subject: Re: limiting performance impact of wal archiving.
Date: 2009-11-10 13:41:24
Message-ID: 20091110134124.GY10895@it.is.rice.edu
Lists: pgsql-performance
On Tue, Nov 10, 2009 at 12:55:42PM +0100, Laurent Laborde wrote:
> Hi !
> We recently had a problem with wal archiving badly impacting the
> performance of our postgresql master.
> And I discovered "cstream", which can limit the bandwidth of a pipe stream.
>
> Here is our new archive command, FYI, which limits the I/O bandwidth to 500 KB/s:
> archive_command = '/bin/cat %p | cstream -i "" -o "" -t -500k | nice
> gzip -9 -c | /usr/bin/ncftpput etc...'
>
>
> PS: While writing this mail, I just found that I could replace:
> cat %p | cstream -i "" ...
> with
> cstream -i %p ...
> *grins*
>
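For reference, the simplified form Laurent describes could be written as a postgresql.conf fragment like this (a sketch only; the ncftpput destination is elided in the original and is left elided here):

```
# postgresql.conf (sketch) -- cstream reads %p itself, so the leading cat is unnecessary
archive_command = 'cstream -i %p -t -500k | nice gzip -9 -c | /usr/bin/ncftpput etc...'
```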
And here is a simple Perl program that I have used for a similar purpose. Obviously, it can be adapted to your specific needs.
Regards,
Ken
----throttle.pl-------
#!/usr/bin/perl -w
require 5.0; # written for perl5, hasta la bye-bye, perl4
use strict;
use Getopt::Std;
#
# This is a simple program to throttle network traffic to a
# specified KB/second to allow a restore in the middle of the
# day over the network.
#
my($file, $chunksize, $len, $offset, $written, $rate, $buf );
my($options, $blocksize, $speed, %convert, $inv_rate, $verbose);
%convert = ( # conversion factors for $speed,$blocksize
    ''  => 1,
    'w' => 2,
    'W' => 2,
    'b' => 512,
    'B' => 512,
    'k' => 1024,
    'K' => 1024,
);
$options = 'vhs:r:b:f:';
#
# set defaults
#
$speed = '100k';
$rate = '5';
$blocksize = '120k'; # Works for the DLT drives under SunOS
$file = '-';
$buf = '';
$verbose = 0; # default to quiet
sub usage {
    my($usage);
    $usage = "Usage: throttle [-s speed][-r rate/sec][-b blksize][-f file][-v][-h]
(writes data to STDOUT)
-s speed max data rate in B/s - defaults to 100k
-r rate writes/sec - defaults to 5
-b size read blocksize - defaults to 120k
-f file file to read for input - defaults to STDIN
-h print this message
-v print parameters used
";
    print STDERR $usage;
    exit(1);
}
getopts($options) || usage;
if ($::opt_h) {
    usage;
}
usage unless $#ARGV < 0;
$speed = $::opt_s if $::opt_s;
$rate = $::opt_r if $::opt_r;
$blocksize = $::opt_b if $::opt_b;
$file = $::opt_f if $::opt_f;
#
# Convert $speed and $blocksize to bytes for use in the rest of the script
if ( $speed =~ /^(\d+)([wWbBkK]*)$/ ) {
    $speed = $1 * $convert{$2};
}
if ( $blocksize =~ /^(\d+)([wWbBkK]*)$/ ) {
    $blocksize = $1 * $convert{$2};
}
$inv_rate = 1/$rate;
$chunksize = int($speed/$rate); # e.g. with the defaults: int(102400/5) = 20480 B per write
$chunksize = 1 if $chunksize == 0;
if ($::opt_v) {
    print STDERR "speed = $speed B/s\nrate = $rate/sec\nblocksize = $blocksize B\nchunksize = $chunksize B\n";
}
# Return error if unable to open file
open(FILE, "<$file") or die "Cannot open $file: $!\n";
# Read data from stdin and write it to stdout at a rate based
# on $rate and $speed.
#
while ($len = sysread(FILE, $buf, $blocksize)) {
    #
    # print out in chunks of $speed/$rate size to allow a smoother load
    $offset = 0;
    while ($len) {
        $written = syswrite(STDOUT, $buf, $chunksize, $offset);
        die "System write error: $!\n" unless defined $written;
        $len -= $written;
        $offset += $written;
        #
        # Now wait 1/$rate seconds before doing the next block
        #
        select(undef, undef, undef, $inv_rate);
    }
}
close(FILE);
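The scheme the script uses - read a block, then dribble it out in chunks of speed/rate bytes with a 1/rate-second pause after each write - can be sketched in Python as well (a hypothetical equivalent for illustration, not part of Ken's script; the function name and defaults mirror throttle.pl's):

```python
import io
import time

def throttle_copy(src, dst, speed=100 * 1024, rate=5, blocksize=120 * 1024):
    """Copy src to dst at roughly `speed` bytes/second.

    Mirrors throttle.pl: each block read is written out in chunks of
    speed/rate bytes, sleeping 1/rate seconds after every chunk.
    """
    chunksize = max(1, speed // rate)        # e.g. 102400 B/s / 5 = 20480 B
    while True:
        buf = src.read(blocksize)
        if not buf:                          # EOF
            break
        for off in range(0, len(buf), chunksize):
            dst.write(buf[off:off + chunksize])
            time.sleep(1.0 / rate)           # pace the writes to hit the target rate
```

As with the Perl version, the sustained rate converges on `speed` because every 1/rate interval moves at most chunksize bytes.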