From: Jim Nasby <jim(at)nasby(dot)net>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Merlin Moncure <mmoncure(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: hint bit cache v6
Date: 2011-06-30 04:31:35
Message-ID: E04B302F-6ED3-4907-A250-F7E5D5821266@nasby.net
Lists: pgsql-hackers
On Jun 29, 2011, at 3:18 PM, Robert Haas wrote:
> To be clear, I don't really think it matters how sensitive the cache
> is to a *complete* flush. The question I want to ask is: how much
> does it take to knock ONE page out of cache? And what are the chances
> of that happening too frequently? It seems to me that if a run of 100
> tuples with the same previously-unseen XID is enough to knock over the
> applecart, then that's not a real high bar - you could easily hit that
> limit on a single page. And if that isn't enough, then I don't
> understand the algorithm.
Would it be reasonable to keep a second-level cache that stores individual XIDs instead of blocks? That would provide protection for XIDs that are extremely common but don't fit well with the pattern of XID ranges we're caching. I would expect this to happen if you had a transaction that touched a bunch of data (i.e., a bulk load or update) some time ago, so the other XIDs around it are less likely to be interesting, but not long enough ago to have been frozen yet. Obviously you couldn't keep too many XIDs in this secondary cache, but if you're just trying to prevent certain pathological cases, hopefully you wouldn't need to keep that many.
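For illustration, something along these lines is what I have in mind; a minimal sketch only, and the names (HotXidEntry, hot_xid_lookup, hot_xid_insert) and the fixed 64-slot size are made up here, not anything from Merlin's patch:

/* Hypothetical second-level cache of individual hot XIDs, consulted
 * before (or after) the block-oriented cache. Everything here is
 * illustrative; slot count and eviction policy are just placeholders. */
#include <stdint.h>
#include <stdbool.h>

typedef uint32_t TransactionId;

#define HOT_XID_SLOTS 64        /* kept small: only pathological XIDs */

typedef struct HotXidEntry
{
    TransactionId xid;          /* cached known-committed XID (0 = empty) */
    uint32_t      hits;         /* usage count, drives eviction */
} HotXidEntry;

static HotXidEntry hot_xids[HOT_XID_SLOTS];

/* Return true if xid is known committed via the hot-XID cache. */
static bool
hot_xid_lookup(TransactionId xid)
{
    for (int i = 0; i < HOT_XID_SLOTS; i++)
    {
        if (hot_xids[i].xid == xid)
        {
            hot_xids[i].hits++;
            return true;
        }
    }
    return false;
}

/* Remember a committed xid that missed the block-level cache,
 * evicting the least-used slot if the cache is full. */
static void
hot_xid_insert(TransactionId xid)
{
    int victim = 0;

    for (int i = 0; i < HOT_XID_SLOTS; i++)
    {
        if (hot_xids[i].xid == 0)       /* empty slot: take it */
        {
            victim = i;
            break;
        }
        if (hot_xids[i].hits < hot_xids[victim].hits)
            victim = i;
    }
    hot_xids[victim].xid = xid;
    hot_xids[victim].hits = 1;
}

Since it only exists to catch a handful of outlier XIDs, a dumb linear scan over a few dozen slots should be cheap enough, and the hit counter keeps a genuinely hot XID from being evicted by one-off misses.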
--
Jim C. Nasby, Database Architect jim(at)nasby(dot)net
512.569.9461 (cell) http://jim.nasby.net