From: Scott Marlowe <smarlowe(at)g2switchworks(dot)com>
To: pgsql-performance(at)lusis(dot)org
Cc: PGSQL Performance <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Performance of pg_dump on PGSQL 8.0
Date: 2006-06-14 16:44:10
Message-ID: 1150303450.26538.9.camel@state.g2switchworks.com
Lists: pgsql-performance
On Wed, 2006-06-14 at 09:47, John E. Vincent wrote:
> -- this is the third time I've tried sending this and I never saw it get
> through to the list. Sorry if multiple copies show up.
>
> Hi all,
BUNCHES SNIPPED
> work_mem = 1048576 ( I know this is high but you should see some of our
> sorts and aggregates)
Ummm. That's REALLY high. Keep in mind work_mem is allocated per sort
or hash operation, not per connection, so just one or two queries could
theoretically run your machine out of memory right now. You might want
to lower the global value and then crank it up on a case-by-case basis,
like during nighttime report generation. Just put a "set
work_mem=1000000" in your script before the big query runs.
> We're inserting around 3mil rows a night if you count staging, info, dim
> and fact tables. The vacuum issue is a whole other problem but right now
> I'm concerned about just the backup on the current hardware.
>
> I've got some space to burn so I could go to an uncompressed backup and
> compress it later during the day.
That's exactly what we do. We just do a normal backup, and have a
script that gzips anything in the backup directory that doesn't end in
.gz... If you've got space to burn, as you say, then use it for at
least a few days to see how it affects backup speeds.
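Nothing fancy is needed; something like this would do it (the backup
path is made up):

    # compress anything in the backup directory not already gzipped
    find /var/backups/pgsql -type f ! -name '*.gz' -exec gzip {} +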
Seeing as how you're CPU bound, most likely the problem is just the
compression pg_dump is doing on the backup.
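If you want to test that, an uncompressed dump is just a matter of
turning the compression off (database name and path are made up here):

    # custom-format dump with compression turned off; gzip it later,
    # during the day, when the CPU isn't needed for the backup itself
    pg_dump -Fc -Z0 mydb > /var/backups/pgsql/mydb.dump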