From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "scott.marlowe" <scott(dot)marlowe(at)ihs(dot)com>
Cc: "Keith C. Perry" <netadmin(at)vcsn(dot)com>, Stephen Robert Norris <srn(at)commsecure(dot)com(dot)au>, satish satish <satish_ach2003(at)yahoo(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: Data Corruption in case of abrupt failure
Date: 2004-03-17 16:41:31
Message-ID: 25740.1079541691@sss.pgh.pa.us
Lists: pgsql-general
"scott.marlowe" <scott(dot)marlowe(at)ihs(dot)com> writes:
> On Tue, 16 Mar 2004, Tom Lane wrote:
>> What I'd suggest is to set up a simple test involving a long string of
>> very small transactions (a bunch of separate INSERTs into a table with
>> no indexes works fine). Time it twice, once with "fsync" enabled and
>> once without. If there's not a huge difference, your drive is lying.
> pgbench is a nice candidate for this.
> pgbench -c 100 -t 100000
I wouldn't do that: first because pgbench transactions are relatively
large (several updates per xact IIRC), and second because with -c 100
you'll be measuring contention effects as well as pure WAL write
activity.
If you simply must use pgbench for this, use -c 1 ... but it's surely
easy enough to make a file of a few thousand copies of
INSERT INTO foo VALUES(1);
and feed it to psql.
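As a concrete illustration, such a file could be generated and timed
along these lines (a minimal sketch; the table name "foo" comes from the
message above, while the file name inserts.sql and database name "test"
are assumptions):

```shell
# Build a file of 5000 identical single-row INSERT statements.
seq 5000 | sed 's/.*/INSERT INTO foo VALUES(1);/' > inserts.sql

# Sanity-check the generated file.
wc -l < inserts.sql

# Then run it twice, toggling fsync in postgresql.conf between runs
# (requires a running server and a table foo; shown as a comment only):
#   time psql -d test -f inserts.sql
```

Each INSERT commits as its own transaction, so with fsync enabled an
honest drive should force one WAL flush per statement; if the timings
with fsync on and off are nearly identical, the drive is lying about
write completion.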
regards, tom lane