From: "scott(dot)marlowe" <scott(dot)marlowe(at)ihs(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: "Goulet, Dick" <DGoulet(at)vicr(dot)com>, pgsql-admin(at)postgresql(dot)org
Subject: Re: hanging for 30sec when checkpointing
Date: 2004-02-10 16:51:24
Message-ID: Pine.LNX.4.33.0402100950590.28531-100000@css120.ihs.com
Lists: pgsql-admin
On Tue, 10 Feb 2004, Tom Lane wrote:
> "scott.marlowe" <scott(dot)marlowe(at)ihs(dot)com> writes:
> >> Unfortunately not --- at checkpoint time, the constraint goes the other
> >> way. We have to be sure all the data file updates are down to disk
> >> before we write a checkpoint record to the WAL log. So you can still
> >> get screwed if the data-file drive lies about write completion.
>
> > Hmmm. OK. Would the transaction size be an issue here? I.e. would small
> > transactions likely be safer against corruption than large transactions?
>
> Transaction size would make no difference AFAICS. Reducing the interval
> between checkpoints might make things safer in such a case.
>
> > I ask because most of the testing I did was with pgbench running 100+
> > simos (on a -s 100 pgbench database) and as long as the WAL drive was
> > fsyncing correctly, the database survived.
>
> Did you try pulling the plug immediately after a CHECKPOINT command
> completes? You could test by manually issuing a CHECKPOINT while
> pgbench runs, and yanking power as soon as the prompt comes back.
I will try that. Thanks for the tip. I'll let you know how it works
out.
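
[For readers following along: Tom's suggestion above about reducing the interval between checkpoints maps onto two settings in the 7.4-era postgresql.conf. The values below are illustrative only, not tuning advice; a checkpoint is triggered whenever either threshold is reached first.]

```
# postgresql.conf -- checkpoint frequency (PostgreSQL 7.4 era, illustrative values)
checkpoint_timeout = 60     # seconds between automatic checkpoints (default 300)
checkpoint_segments = 3     # WAL segments filled between checkpoints (default 3)
```

[Lowering checkpoint_timeout forces checkpoints more often, shrinking the window of dirty data that a lying drive could lose; the cost is more write I/O during normal operation.]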