Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com> wrote:
> In the SSI patch, you'd also need a way to insert an existing
> struct into a hash table. You currently work around that by using
> a hash element that contains only the hash key, and a pointer to
> the SERIALIZABLEXACT struct. It isn't too bad I guess, but I find
> it a bit confusing.
Hmmm...  Mucking with the hash table implementation to accommodate
that seems like a lot of work and risk for pretty minimal benefit.
Are you sure it's worth it?  Perhaps better commenting around the
SERIALIZABLEXID structure would suffice, to make clear that it's
effectively used as a non-primary-key index into the other
collection?
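Just to be sure we're describing the same thing, my recollection is
that the structure in question is little more than this (layout from
memory, so details may be slightly off):

    typedef struct SERIALIZABLEXIDTAG
    {
        TransactionId   xid;            /* hash key */
    } SERIALIZABLEXIDTAG;

    typedef struct SERIALIZABLEXID
    {
        SERIALIZABLEXIDTAG  tag;        /* hash key */
        SERIALIZABLEXACT   *myXact;     /* the entry it indexes */
    } SERIALIZABLEXID;

So it really is just the key plus a pointer into the other
collection.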
> Well, we generally try to avoid dynamic structures in shared
> memory, because shared memory can't be resized.
But don't HTAB structures go beyond their estimated sizes as needed?
I was trying to accommodate the situation where one collection
might not be anywhere near its limit, but some other collection has
edged past. Unless I'm misunderstanding things (which is always
possible), the current HTAB implementation takes advantage of the
"slush fund" of unused space to some degree. I was just trying to
maintain the same flexibility with the list.
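At least that's how I read the usual ShmemInitHash() idiom -- the
init_size and max_size arguments differ, and the table can draw on
leftover shared memory as it grows toward the maximum.  Something
along these lines (the names and sizes here are just placeholders):

    HASHCTL     info;
    HTAB       *SerializableXidHash;
    long        max_table_size = 1024;     /* placeholder */

    MemSet(&info, 0, sizeof(info));
    info.keysize = sizeof(SERIALIZABLEXIDTAG);
    info.entrysize = sizeof(SERIALIZABLEXID);
    info.hash = tag_hash;

    /* start at half the expected size; may grow toward the maximum */
    SerializableXidHash = ShmemInitHash("SERIALIZABLEXID hash",
                                        max_table_size / 2,
                                        max_table_size,
                                        &info,
                                        HASH_ELEM | HASH_FUNCTION);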
I was thinking of returning a size based on the *maximum* allowed
allocations from the estimated size function, and actually limiting
it to that size. So it wasn't so much a matter of grabbing more
than expected, but leaving something for the hash table slush if
possible. Of course I was also thinking that this would allow one
to be a little bit more generous with the maximum, as it might have
benefit elsewhere...
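In other words, the shmem-size function would report based on the
hard maximum, something like this (the function and constant names
here are just illustrative):

    static Size
    PredicateLockShmemSize(void)
    {
        Size        size = 0;

        /* the transaction list, at its full fixed maximum */
        size = add_size(size, mul_size(MaxSerializableXacts,
                                       sizeof(SERIALIZABLEXACT)));

        /* the hash table, estimated at its maximum entry count */
        size = add_size(size, hash_estimate_size(MaxSerializableXacts,
                                                 sizeof(SERIALIZABLEXID)));

        return size;
    }

and then the list allocation itself would refuse to go past
MaxSerializableXacts, leaving anything unused in the general pool
for the hash tables to grow into.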
> So, you'd typically use an array with a fixed number of elements.
That's certainly a little easier, if you think it's better.
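It would presumably follow the usual ShmemInitStruct() pattern,
something like this (again, the names and the "inUse" flag are just
for illustration):

    bool                found;
    SERIALIZABLEXACT   *PredXact;

    PredXact = (SERIALIZABLEXACT *)
        ShmemInitStruct("SERIALIZABLEXACT array",
                        mul_size(MaxSerializableXacts,
                                 sizeof(SERIALIZABLEXACT)),
                        &found);
    if (!found)
    {
        int         i;

        /* first time through: mark every slot as available */
        for (i = 0; i < MaxSerializableXacts; i++)
            PredXact[i].inUse = false;
    }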
> Any chance of collapsing together entries of already-committed
> transactions in the SSI patch, to put an upper limit on the number
> of shmem list entries needed? If you can do that, then a simple
> array allocated at postmaster startup will do fine.
I suspect it can be done, but I'm quite sure that any such scheme
would increase the rate of serialization failures. Right now I'm
trying to see how much I can do to *decrease* the rate of
serialization failures, so I'm not eager to go there. :-/ If it is
necessary, the most obvious way to manage this is just to force
cancellation of the oldest running serializable transaction and then
run ClearOldPredicateLocks(), perhaps iterating, until we free an
entry to service the new request.
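Roughly like this, where ClearOldPredicateLocks() is the existing
cleanup function and the other two names are made up for
illustration:

    static SERIALIZABLEXACT *
    AcquireSerializableXactSlot(void)
    {
        SERIALIZABLEXACT   *sxact;

        while ((sxact = GetFreeSerializableXact()) == NULL)
        {
            /* force the oldest running serializable transaction to fail */
            CancelOldestSerializableXact();

            /* then reclaim whatever has become eligible for cleanup */
            ClearOldPredicateLocks();
        }

        return sxact;
    }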
-Kevin