From: Michael Brusser <michael(at)synchronicity(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Hitting the nfile limit
Date: 2003-07-04 19:03:55
Message-ID: DEEIJKLFNJGBEMBLBAHCEEKJDFAA.michael@synchronicity.com
Lists: pgsql-hackers
> > I wonder how Postgres handles this situation.
> > (Or power outage, or any hard system fault, at this point)
>
> Theoretically we should be able to recover from this without loss of
> committed data (assuming you were running with fsync on). Is your QA
> person certain that the record in question had been written by a
> successfully-committed transaction?
>
He's saying that his test script did not write any new records, only
updated existing ones.
My uneducated guess at how an update may work:
- create a clone record from the one to be updated
  and update some field(s) with the given values.
- write the new record to the database and delete the original.
If this is the case, could it be that somewhere along these lines
postgres ran into a problem and lost the record completely?
But all this should be done in a transaction, so... I don't know...
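(For what it's worth, one way to check this guess from psql is to watch
the ctid and xmin system columns across an update; the table here is just
a hypothetical scratch table:)

    -- hypothetical scratch table, only to watch the system columns
    CREATE TABLE t (id int, val text);
    INSERT INTO t VALUES (1, 'old');
    SELECT ctid, xmin, * FROM t;   -- tuple's physical location and creating xact
    UPDATE t SET val = 'new' WHERE id = 1;
    SELECT ctid, xmin, * FROM t;   -- both change: a new row version was written
    VACUUM t;                      -- the dead original version is reclaimed later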
As for fsync, we currently go with whatever the default value is,
same for wal_sync_method.
Does anyone have an estimate of the performance penalty related to
turning fsync on?
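(For reference, a minimal sketch of the relevant postgresql.conf lines;
the setting names are real, but the values shown are only assumptions,
since the wal_sync_method default varies by platform:)

    # postgresql.conf -- sketch only, not our actual config
    fsync = true                 # flush WAL to disk at transaction commit
    #wal_sync_method = fsync     # fsync | fdatasync | open_sync | open_datasync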
Michael.