From: Benjamin Smith <lists(at)benjamindsmith(dot)com>
To: pgsql-general(at)postgresql(dot)org
Cc: Vick Khera <vivek(at)khera(dot)org>
Subject: Re: pg_dump makes our system unusable - any way to pg_dump in the middle of the day? (postgres 8.4.4)
Date: 2011-02-25 23:50:31
Message-ID: 201102251550.31392.lists@benjamindsmith.com
Lists: pgsql-general
I'd also add: run pgtune on your server. It made a *dramatic* difference for us.
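For context, pgtune generates a postgresql.conf tuned to the machine's RAM and workload. The sketch below is illustrative only (the RAM figure is an assumed example, and pgtune's real heuristics are more nuanced), but it shows the kind of arithmetic behind two of the settings it adjusts:

```shell
# Illustrative only: not pgtune's actual algorithm, just the common
# rule-of-thumb starting points it is built around.
ram_mb=16384                                # total RAM in MB (assumed example value)
shared_buffers=$(( ram_mb / 4 ))            # ~25% of RAM is a common starting point
effective_cache_size=$(( ram_mb * 3 / 4 ))  # ~75% of RAM, counting the OS page cache
echo "shared_buffers = ${shared_buffers}MB"
echo "effective_cache_size = ${effective_cache_size}MB"
```

On stock 8.4, shared_buffers defaults to a tiny value, which is one reason an untuned install falls over under concurrent load.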
On Friday, February 25, 2011 05:26:56 am Vick Khera wrote:
> On Thu, Feb 24, 2011 at 6:38 PM, Aleksey Tsalolikhin
>
> <atsaloli(dot)tech(at)gmail(dot)com> wrote:
> > In practice, if I pg_dump our 100 GB database, our application, which
> > is half Web front end and half OLTP, at a certain point, slows to a
> > crawl and the Web interface becomes unresponsive. I start getting
> > check_postgres complaints about number of locks and query lengths. I
> > see locks around for over 5 minutes.
>
> I'd venture to say your system does not have enough memory and/or disk
> bandwidth, or your Pg is not tuned to make use of enough of your
> memory. The most likely thing is that you're saturating your disk
> I/O.
>
> Check the various system statistics from iostat and vmstat to see what
> your baseline load is, then compare that when pg_dump is running. Are
> you dumping over the network or to the local disk as well?
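To make the baseline-vs-dump comparison concrete, run something like `iostat -x 5` before and during the pg_dump and watch the %util column. The sample line below is made up for illustration, and field positions vary between iostat versions, but the awk extraction shows the idea:

```shell
# Hypothetical iostat -x output line; on many versions %util is the last field.
sample='sda 0.00 12.00 3.00 45.00 120.0 5300.0 226.0 4.1 85.2 1.9 97.3'
util=$(echo "$sample" | awk '{print $NF}')
echo "disk utilization: ${util}%"
# If %util sits near 100 for the whole dump, the disk is saturated and
# everything else (including the web front end) queues behind it.
```

`vmstat 5` alongside it will show whether you're also short on memory (high si/so swap columns) rather than just disk bandwidth.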