From: Zeugswetter Andreas IZ5 <Andreas(dot)Zeugswetter(at)telecom(dot)at>
To: pgsql-hackers(at)postgreSQL(dot)org
Subject: AW: [HACKERS] fsynch of pg_log write..
Date: 1999-06-25 14:10:19
Message-ID: 219F68D65015D011A8E000006F8590C60267B3B5@sdexcsrv1.f000.d0188.sd.spardat.at
Lists: pgsql-hackers
> > committed". The problem is when a client is told something,
> > that is not true after a crash, which can happen if the second
> > flush is left out.
>
> But commercial db's do that. They return 'done' for every query, while
> they write their log files every X seconds. We need to allow this. No
> reason to be more reliable than commercial db's by default. Or, at
> least we need to give them the option because the speed advantage is
> huge.
>
I agree we should give the option, but commercial db's don't do that.
Oracle does not do it (only on Linux).
Informix only does it when you specifically create the database with
buffered logging (create database dada with buffered log;). I always use it :-)
Informix has a log buffer, which is flushed at transaction commit
(unbuffered logging) or when the buffer is full (buffered logging).
Neither of them does any "every X seconds" stuff.
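
To make the distinction concrete, here is a minimal sketch of a log buffer
with the two flush policies. This is not actual PostgreSQL or Informix code;
the log_buffer_t struct, the 8 kB buffer size, and the function names are
made up for illustration.

#include <string.h>
#include <unistd.h>

#define LOG_BUF_SIZE 8192

typedef struct {
    char   data[LOG_BUF_SIZE];
    size_t used;
    int    fd;        /* log file descriptor */
    int    buffered;  /* 0 = unbuffered logging, 1 = buffered logging */
} log_buffer_t;

/* Write the buffer to the log file and force it to stable storage. */
static void log_flush(log_buffer_t *lb)
{
    if (lb->used == 0)
        return;
    (void) write(lb->fd, lb->data, lb->used);
    fsync(lb->fd);            /* the expensive part */
    lb->used = 0;
}

/* Append a log record; flush only when the buffer is full.
 * (Assumes a single record always fits in the buffer.) */
static void log_append(log_buffer_t *lb, const char *rec, size_t len)
{
    if (lb->used + len > LOG_BUF_SIZE)
        log_flush(lb);
    memcpy(lb->data + lb->used, rec, len);
    lb->used += len;
}

/* At commit: with unbuffered logging the buffer is forced to disk before
 * the client is told "committed"; with buffered logging the commit record
 * just sits in the buffer until it fills, so a crash can lose transactions
 * the client already saw acknowledged. */
static void log_commit(log_buffer_t *lb)
{
    log_append(lb, "COMMIT", 6);
    if (!lb->buffered)
        log_flush(lb);
}

With buffered logging the fsync() per commit disappears, which is where the
speed advantage comes from, at the cost of possibly losing acknowledged
transactions after a crash.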
Andreas