From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: including backend ID in relpath of temp rels - updated patch
Date: 2010-08-06 19:47:02
Message-ID: AANLkTimWFYjOEnXg0i0SNESTBqfSkL9JnXAUybaWOCCg@mail.gmail.com
Lists: pgsql-hackers
On Fri, Aug 6, 2010 at 2:43 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Robert Haas <robertmhaas(at)gmail(dot)com> writes:
>> On Fri, Aug 6, 2010 at 2:07 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>>> Sure, it tops out somewhere, but 32K is way too close to configurations
>>> we know work well enough in the field (I've seen multiple reports of
>>> people using a couple thousand backends).
>
>> Well, I wouldn't expect anyone to use an exclusive lock for readers
>> without a good reason, but you still have n backends that each have to
>> read, presumably, about O(n) messages, so eventually that's going to
>> start to pinch.
>
> Sure, but I don't see much to be gained from multiple queues either.
> There are few (order of zero, in fact) cases where sinval messages
> are transmitted that aren't of potential interest to all backends.
> Maybe you could do something useful with a very large number of
> dynamically-defined queues (like one per relation) ... but managing that
> would probably swamp any savings.
Well, what I was thinking is that if you could guarantee that a
certain backend COULDN'T have a particular relfilenode open at the
smgr level, for example, then it needn't read the invalidation
messages for that relfilenode. Precisely how to slice that up is
another matter. For the present case, for instance, you could create
one queue per backend. In the normal course of events, each
backend would subscribe only to its own queue, but if one backend
wanted to drop a temporary relation belonging to some other backend,
it would temporarily subscribe to that backend's queue, do whatever it
needed to do, and then flush all the smgr references before
unsubscribing from the queue. That's a bit silly because we doubtless
wouldn't invent such a complicated mechanism just for this case, but I
think it illustrates the kind of thing that one could do. Or if you
wanted to optimize for the case of a large number of databases running
in a single cluster, perhaps you'd want one queue per database plus a
shared queue for the shared catalogs. I don't know. This is just pie
in the sky.
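
To make the per-backend-queue idea concrete, here's a purely
illustrative sketch in C. None of these queue-management functions
exist in PostgreSQL; the InvalQueue* names are invented for this
example, with the real smgrcloseall() standing in for "flush all the
smgr references":

    typedef int BackendId;

    /* invented queue-management primitives, for illustration only */
    extern void InvalQueueSubscribe(BackendId owner);
    extern void InvalQueueUnsubscribe(BackendId owner);
    extern void InvalQueueReadPending(void);  /* consume queued messages */

    /* real PostgreSQL function: closes all cached smgr relations */
    extern void smgrcloseall(void);

    /* hypothetical helper that performs the actual drop */
    extern void DropRelationInternal(unsigned rel);

    void
    DropOtherBackendsTempRel(BackendId owner, unsigned rel)
    {
        /* Temporarily listen to the owning backend's queue ... */
        InvalQueueSubscribe(owner);

        /* ... catch up on anything already queued there ... */
        InvalQueueReadPending();

        /* ... perform the drop, which queues invalidations of its own ... */
        DropRelationInternal(rel);

        /*
         * ... and make sure we hold no stale smgr references before we
         * stop listening, since we'll miss any later messages sent to
         * this queue once we unsubscribe.
         */
        smgrcloseall();
        InvalQueueUnsubscribe(owner);
    }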
>> Do you think it's worth worrying about the reduction in the number of
>> possible SI message types?
>
> IIRC the number of message types is the number of catalog caches plus
> half a dozen or so. We're a long way from exhausting even a 1-byte
> ID field; and we could play more games if we had to, since there would
> be a padding byte free in the message types that refer to a catalog
> cache. IOW, 1-byte id doesn't bother me.
OK.
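
For the archives, the layout in question is roughly the following (a
simplified sketch along the lines of src/include/storage/sinval.h;
the exact fields and message variants may differ):

    #include <stdint.h>

    typedef uint32_t Oid;

    /* Positive ids mean "catcache number"; negative ids select
     * the other message types. */
    #define SHAREDINVALRELCACHE_ID  (-1)
    #define SHAREDINVALSMGR_ID      (-2)

    typedef struct
    {
        int8_t   id;        /* catcache id --- must be first */
        /* alignment padding here: room to grow if we ever need it */
        Oid      dbId;      /* database ID, or 0 for a shared catalog */
        uint32_t hashValue; /* hash of the cached tuple's lookup key */
    } SharedInvalCatcacheMsg;

    typedef struct
    {
        int8_t   id;        /* type field --- must be first */
        Oid      dbId;
        Oid      relId;
    } SharedInvalRelcacheMsg;

    typedef union
    {
        int8_t                 id;  /* discriminator, first in every variant */
        SharedInvalCatcacheMsg cc;
        SharedInvalRelcacheMsg rc;
    } SharedInvalidationMessage;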
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company