From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "John D(dot) Burger" <john(at)mitre(dot)org>
Cc: "pgsql-general postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Limit on number of users in postgresql?
Date: 2007-01-29 19:48:38
Message-ID: 18652.1170100118@sss.pgh.pa.us
Lists: pgsql-general
"John D. Burger" <john(at)mitre(dot)org> writes:
> Why doesn't the postmaster read the db files directly, presumably
> using some of the same code the backends do, or is it too hard to bypass
> the shared memory layer?
It's not "too hard", it's simply wrong. The copy on disk may be out of
date because recent changes haven't been flushed out of shared buffers
yet. Moreover, without any locking you can't ensure you get a consistent
view of the data.
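As a toy illustration of the staleness problem (plain C, invented names,
not PostgreSQL code): with a write-back cache, a direct read of the
on-disk copy misses any change that still lives only in the in-memory
buffer.

    /*
     * Toy write-back cache (invented names, not PostgreSQL code).  Backends
     * update the in-memory copy; the disk copy is only brought up to date
     * by a later flush, so a direct file read in between sees stale data.
     */
    #include <stdio.h>
    #include <string.h>

    #define VALUE_LEN 64

    struct page {
        char disk_copy[VALUE_LEN];   /* what a direct file read sees      */
        char buffer_copy[VALUE_LEN]; /* current contents in shared memory */
        int  dirty;                  /* buffer newer than disk?           */
    };

    static void update(struct page *p, const char *new_value)
    {
        /* Modifications land in the shared buffer only. */
        strncpy(p->buffer_copy, new_value, VALUE_LEN - 1);
        p->buffer_copy[VALUE_LEN - 1] = '\0';
        p->dirty = 1;
    }

    static void flush(struct page *p)
    {
        /* A checkpoint eventually writes the buffer out to disk. */
        memcpy(p->disk_copy, p->buffer_copy, VALUE_LEN);
        p->dirty = 0;
    }

    int main(void)
    {
        struct page p = { "roles: alice", "roles: alice", 0 };

        update(&p, "roles: alice, bob");            /* e.g. CREATE ROLE bob */

        printf("disk sees:   %s\n", p.disk_copy);   /* still "roles: alice" */
        printf("buffer sees: %s\n", p.buffer_copy);

        flush(&p);
        printf("after flush: %s\n", p.disk_copy);   /* now includes bob */
        return 0;
    }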
> Another thing you folks must have
> considered would be to keep the out-of-memory copies of this kind of
> data in something faster than a flat file - say Berkeley DB. Do
> either of these things make sense?
If I were going to do anything about this, I'd think about teaching the
postmaster about some kind of incremental-update protocol instead of
rereading the whole flat file every time. The issue with any such idea
is that it pushes complexity, and therefore risk of bugs, into the
postmaster which is exactly where we can't afford bugs. Given the lack
of actual performance complaints from the field so far, I'm not inclined
to do anything for now ...
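For concreteness, a purely hypothetical sketch of what one such
incremental-update record might look like; none of these type or function
names exist in PostgreSQL, and it ignores the hard parts (ordering, crash
recovery, and keeping the postmaster's copy consistent):

    /*
     * Hypothetical incremental-update record -- invented for illustration
     * only.  The idea is that each auth change ships one small message
     * instead of forcing a full reread of the flat file.
     */
    #include <stdio.h>
    #include <string.h>

    #define AUTHNAME_MAX 64

    typedef enum { AUTH_ADD_ROLE, AUTH_DROP_ROLE, AUTH_SET_PASSWORD } AuthUpdateOp;

    typedef struct {
        AuthUpdateOp op;
        char         rolename[AUTHNAME_MAX];
    } AuthUpdateMsg;

    /* A backend that just committed CREATE ROLE would queue one record... */
    static AuthUpdateMsg make_add_role(const char *name)
    {
        AuthUpdateMsg msg;
        memset(&msg, 0, sizeof msg);
        msg.op = AUTH_ADD_ROLE;
        strncpy(msg.rolename, name, AUTHNAME_MAX - 1);
        return msg;
    }

    /*
     * ...and the postmaster would apply it to its in-memory list instead
     * of reparsing everything.  The hard part -- and the bug risk -- is
     * keeping that list correct across every failure and ordering case.
     */
    static void postmaster_apply(const AuthUpdateMsg *msg)
    {
        if (msg->op == AUTH_ADD_ROLE)
            printf("postmaster: added role \"%s\"\n", msg->rolename);
    }

    int main(void)
    {
        AuthUpdateMsg msg = make_add_role("bob");
        postmaster_apply(&msg);
        return 0;
    }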
regards, tom lane