From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Christopher Kings-Lynne" <chriskl(at)familyhealth(dot)com(dot)au>
Cc: "Barry Lind" <barry(at)xythos(dot)com>, "Karel Zak" <zakkr(at)zf(dot)jcu(dot)cz>, pgsql-hackers(at)postgresql(dot)org, "Neil Conway" <nconway(at)klamath(dot)dyndns(dot)org>
Subject: Re: 7.3 schedule
Date: 2002-04-13 15:46:01
Message-ID: 15740.1018712761@sss.pgh.pa.us
Lists: pgsql-hackers

"Christopher Kings-Lynne" <chriskl(at)familyhealth(dot)com(dot)au> writes:
> thought out way of predicting/limiting their size. (2) How the heck do
> you get rid of obsoleted cached plans, if the things stick around in
> shared memory even after you start a new backend? (3) A shared cache
> requires locking; contention among multiple backends to access that
> shared resource could negate whatever performance benefit you might hope
> to realize from it.

> I don't understand all these locking problems?

Searching the cache and inserting/deleting entries in the cache probably
have to be mutually exclusive; concurrent insertions probably won't work
either (at least not without a remarkably intelligent data structure).
Unless the cache hit rate is remarkably high, there are going to be lots
of insertions --- and, at steady state, an equal rate of deletions ---
leading to lots of contention.
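
As a minimal sketch of that pattern (plain C with a pthread read/write
lock, not PostgreSQL's shared-memory or locking code; all names are
illustrative): lookups can share the lock, but every insertion or
eviction must take it exclusively, so a low hit rate turns the cache
itself into a serialization point.

    #include <pthread.h>
    #include <string.h>

    #define CACHE_SLOTS 128

    typedef struct CachedPlan
    {
        char    query[256];     /* key: query text (illustrative only) */
        void   *plan;           /* opaque pointer to the stored plan */
        int     valid;
    } CachedPlan;

    static CachedPlan       cache[CACHE_SLOTS];
    static pthread_rwlock_t cache_lock = PTHREAD_RWLOCK_INITIALIZER;

    /* Search: shared lock, so concurrent backends can read at once. */
    void *
    cache_lookup(const char *query)
    {
        void   *result = NULL;
        int     i;

        pthread_rwlock_rdlock(&cache_lock);
        for (i = 0; i < CACHE_SLOTS; i++)
        {
            if (cache[i].valid && strcmp(cache[i].query, query) == 0)
            {
                result = cache[i].plan;
                break;
            }
        }
        pthread_rwlock_unlock(&cache_lock);
        return result;
    }

    /* Insert (and, implicitly, evict): exclusive lock, blocking every
     * concurrent lookup for the duration.  With a low hit rate this
     * path runs often, which is where the contention comes from. */
    void
    cache_insert(const char *query, void *plan)
    {
        static int victim = 0;      /* trivial round-robin eviction */

        pthread_rwlock_wrlock(&cache_lock);
        strncpy(cache[victim].query, query, sizeof(cache[victim].query) - 1);
        cache[victim].query[sizeof(cache[victim].query) - 1] = '\0';
        cache[victim].plan = plan;
        cache[victim].valid = 1;
        victim = (victim + 1) % CACHE_SLOTS;
        pthread_rwlock_unlock(&cache_lock);
    }
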
This could possibly be avoided if the cache is not used for all query
plans but only for explicitly PREPAREd plans, so that only explicit
EXECUTEs would need to search it. But that approach also makes a
sizable dent in the usefulness of the cache to begin with.
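
A sketch of that narrower design, reusing the lookup/insert helpers
above (plan_query_locally is a hypothetical stand-in for the backend's
ordinary planning path, not a real function):

    void *
    get_plan(const char *query, int is_explicit_execute)
    {
        void   *plan;

        if (is_explicit_execute)
        {
            /* Only explicit EXECUTE pays the shared-lock cost ... */
            plan = cache_lookup(query);
            if (plan == NULL)
            {
                plan = plan_query_locally(query);   /* hypothetical */
                cache_insert(query, plan);
            }
        }
        else
        {
            /* ... while ordinary queries never contend on the cache,
             * which is also why they gain nothing from it. */
            plan = plan_query_locally(query);
        }
        return plan;
    }
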
regards, tom lane