From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Bruce Momjian <maillist(at)candle(dot)pha(dot)pa(dot)us>
Cc: pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: [HACKERS] Priorities for 6.6
Date: 1999-06-07 14:57:26
Message-ID: 24776.928767446@sss.pgh.pa.us
Lists: pgsql-hackers
Bruce Momjian <maillist(at)candle(dot)pha(dot)pa(dot)us> writes:
> ... Another idea
> is to send a signal to each backend that has marked a bit in shared
> memory saying it has written to a relation, and have the signal handler
> fsync all its dirty relations, set a finished bit, and have the
> postmaster then fsync pglog.
I do not think it's practical to expect any useful work to happen inside
a signal handler. The signal could come at any moment, such as when
data structures are being updated and are in a transient invalid state.
Unless you are willing to do a lot of fooling around with blocking &
unblocking the signal, about all the handler can safely do is set a flag
variable that will be examined somewhere in the backend main loop.
However, if enough information is available in shared memory, perhaps
the postmaster could do this scan/update/flush all by itself?
> Of course, we have to prevent flush of pglog by OS, perhaps by making a
> copy of the last two pages of pg_log before this and remove it after.
> If a backend starts up and sees that pg_log copy file, it puts that in
> place of the current last two pages of pg_log.
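Bruce's save-and-restore idea for the pg_log tail could look roughly like the following sketch. Everything here is an assumption for illustration: the save-file name, the 8K page size, the two-page tail, and the function names are not from the original proposal's code (none exists).

```c
#include <stdio.h>
#include <string.h>

enum { PAGE = 8192, TAIL_PAGES = 2 };   /* assumed page size and tail length */

/* Copy the last TAIL_PAGES pages (or the whole file, if smaller) of src
 * into dst; returns the number of bytes saved, or -1 on error. */
long
save_tail(const char *src, const char *dst)
{
    FILE *in, *out;
    char  buf[PAGE * TAIL_PAGES];
    long  size, n;

    if ((in = fopen(src, "rb")) == NULL)
        return -1;
    fseek(in, 0, SEEK_END);
    size = ftell(in);
    n = size < (long) sizeof(buf) ? size : (long) sizeof(buf);
    fseek(in, size - n, SEEK_SET);
    fread(buf, 1, (size_t) n, in);
    fclose(in);

    if ((out = fopen(dst, "wb")) == NULL)
        return -1;
    fwrite(buf, 1, (size_t) n, out);
    fclose(out);
    return n;
}

/* At backend startup: if a saved copy exists, write it back over the
 * tail of the log and remove it.  Returns 1 if restored, 0 if there
 * was nothing to undo, -1 on error. */
int
restore_tail(const char *logfile, const char *savefile)
{
    FILE *in, *out;
    char  buf[PAGE * TAIL_PAGES];
    long  n;

    if ((in = fopen(savefile, "rb")) == NULL)
        return 0;               /* no saved copy: clean shutdown */
    n = (long) fread(buf, 1, sizeof(buf), in);
    fclose(in);

    if ((out = fopen(logfile, "rb+")) == NULL)
        return -1;
    fseek(out, -n, SEEK_END);
    fwrite(buf, 1, (size_t) n, out);
    fclose(out);
    remove(savefile);
    return 1;
}
```

The save file doubles as the "crash happened mid-flush" marker: its mere existence at startup tells the backend the tail may be inconsistent and must be rolled back.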
It seems to me that one or so disk writes per transaction is not all
that big a cost. Does it take much more than one write to update
pg_log, and if so why?
regards, tom lane