YAMAMOTO Takashi <yamt(at)mwd(dot)biglobe(dot)ne(dot)jp> wrote:
> thanks for quickly fixing the problems.
Thanks for the rigorous testing. :-)
> I tested the latest version
> (a2eb9e0c08ee73208b5419f5a53a6eba55809b92), and the only errors I
> got were "out of shared memory". I'm not sure whether they were
> caused by SSI activity or not.
> PG_DIAG_SEVERITY: WARNING
> PG_DIAG_SQLSTATE: 53200
> PG_DIAG_MESSAGE_PRIMARY: out of shared memory
> PG_DIAG_SOURCE_FILE: shmem.c
> PG_DIAG_SOURCE_LINE: 190
> PG_DIAG_SOURCE_FUNCTION: ShmemAlloc
>
> PG_DIAG_SEVERITY: ERROR
> PG_DIAG_SQLSTATE: 53200
> PG_DIAG_MESSAGE_PRIMARY: out of shared memory
> PG_DIAG_SOURCE_FILE: dynahash.c
> PG_DIAG_SOURCE_LINE: 925
> PG_DIAG_SOURCE_FUNCTION: hash_search_with_hash_value
Nor am I. Some additional information would help.
(1) Could you post the non-default configuration settings?
(2) How many connections are in use in your testing?
(3) Can you give a rough breakdown of how many transactions of each
type are in the mix?
(4) Are there any long-running transactions?
(5) How many of these errors do you get in what amount of time?
(6) Does the application continue to run relatively sanely, or does
it fall over at this point?
(7) The message hint would help pin it down, and a stack trace at
the point of the error would help even more. Is it possible to get
either? (If your harness uses libpq, the hint is available through
the same PG_DIAG_* interface as the fields above; see the sketch
after this list.) Looking over the code, it appears that anywhere
SSI itself could generate that error, it would cancel the affected
transaction with the hint "You might need to increase
max_pred_locks_per_transaction." and otherwise allow normal
processing.
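For (7), since your report already shows the PG_DIAG_* fields, the
hint should be reachable the same way. A minimal sketch, assuming a
placeholder connection string and query:

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        /* connection string and query are placeholders for the test */
        PGconn     *conn = PQconnectdb("dbname=test");
        PGresult   *res;

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            return 1;
        }

        res = PQexec(conn, "SELECT 1");
        if (PQresultStatus(res) == PGRES_FATAL_ERROR)
        {
            /* same PG_DIAG_* interface as the fields quoted above */
            const char *hint = PQresultErrorField(res, PG_DIAG_MESSAGE_HINT);

            fprintf(stderr, "hint: %s\n", hint ? hint : "(none)");
        }

        PQclear(res);
        PQfinish(conn);
        return 0;
    }

For (1), something like "select name, setting from pg_settings where
source <> 'default';" should show everything that has been changed.
And if predicate locks do turn out to be the culprit, raising
max_pred_locks_per_transaction would be the first thing to try; the
default is 64, and changing it requires a restart.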
Even with the above information it may be far from clear where
allocations are going past their maximum, since one HTAB can grab
more than its share and starve another that stays below its
"maximum". I'll take a look at the possibility of adding a warning
or some such when an HTAB expands past its declared maximum size.
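Roughly what I have in mind, as a sketch only (hash_search() and
hash_get_num_entries() are the existing dynahash functions; the
wrapper and its "declared_max" and "name" parameters are
illustrative, since a real patch would put the check inside
dynahash.c, which already knows the maximum each table was created
with):

    #include "postgres.h"
    #include "utils/hsearch.h"

    /*
     * Illustrative wrapper: emit a one-time warning when a table
     * grows past the maximum size it was created with.
     */
    static void *
    hash_search_warn(HTAB *hashp, const char *name, const void *keyPtr,
                     HASHACTION action, bool *foundPtr,
                     long declared_max, bool *warned)
    {
        void   *result = hash_search(hashp, keyPtr, action, foundPtr);

        if (action == HASH_ENTER && !*warned &&
            hash_get_num_entries(hashp) > declared_max)
        {
            elog(WARNING,
                 "hash table \"%s\" expanded past its declared maximum of %ld entries",
                 name, declared_max);
            *warned = true;
        }

        return result;
    }

That would at least make it obvious which table is eating more than
its share when one of these "out of shared memory" errors shows up.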
-Kevin