From: Kevin Brown <kevin(at)sysexperts(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Win32 Powerfail testing
Date: 2003-03-07 20:43:01
Message-ID: 20030307204300.GY1833@filer
Lists: pgsql-hackers
Bruce Momjian wrote:
> The idea of using this on Unix is tempting, but Tatsuo is using a
> threaded backend, so it is a little easier to do. However, it would
> probably be pretty easy to write a file of modified file names that the
> checkpoint could read and open/fsync/close.
Even that's not strictly necessary -- we *do* have shared memory we
can use for this, and even when hundreds of tables have been written
the list will only end up being a few tens of kilobytes in size (plus
whatever overhead is required to track and manipulate the entries).
But even then, we don't actually have to track the *names* of the
files that have changed, just their RelFileNodes, since there's a
mapping function from the RelFileNode to the filename.
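The idea above could be sketched roughly as follows. This is not PostgreSQL source: the struct fields, the `relpath()` formatting, and the fixed-size list are simplified assumptions standing in for the real shared-memory machinery and RelFileNode-to-path mapping.

```c
/* Sketch: track dirty relations by a RelFileNode-like identifier and
 * fsync them at checkpoint via open/fsync/close.  Names and layout
 * here are illustrative assumptions, not the real backend code. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

typedef struct {
    unsigned tblNode;   /* database/tablespace id (simplified) */
    unsigned relNode;   /* relation file id */
} DirtyRel;

#define MAX_DIRTY 1024
static DirtyRel dirty[MAX_DIRTY];   /* would live in shared memory */
static int ndirty = 0;

/* Map an identifier to a filename; the real mapping is richer. */
static void relpath(const DirtyRel *r, char *buf, size_t len)
{
    snprintf(buf, len, "base/%u/%u", r->tblNode, r->relNode);
}

/* Checkpoint sweep: open/fsync/close each dirty relation's file. */
static int sync_dirty_rels(void)
{
    char path[64];
    for (int i = 0; i < ndirty; i++) {
        relpath(&dirty[i], path, sizeof(path));
        int fd = open(path, O_RDWR);
        if (fd < 0)
            return -1;              /* file may have been dropped */
        if (fsync(fd) != 0) {
            close(fd);
            return -1;
        }
        close(fd);
    }
    ndirty = 0;                     /* list is clean again */
    return 0;
}
```

Since the list holds only two integers per relation, a few hundred dirty relations fit in a few kilobytes of shared memory, as noted above.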
> Of course, if there are lots of files, sync() may be faster than
> opening/fsync/closing all those files.
This is true, and is something I hadn't actually thought of. So it
sounds like some testing would be in order.
Unfortunately, I know of no system call that will take an array of
file descriptors (or file names! May as well go for the gold when
wishing for something :-) and sync them all to disk in an optimal
order...
--
Kevin Brown kevin(at)sysexperts(dot)com