From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Andres Freund <andres(at)2ndquadrant(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Kevin Grittner <kgrittn(at)ymail(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: alter_table regression test problem
Date: 2013-11-13 15:59:22
Message-ID: CA+TgmoZasXcRxT4Or18_DMjtxbXmn87YyBhHFmogC=8DOaAs+w@mail.gmail.com
Lists: pgsql-hackers
On Mon, Nov 11, 2013 at 4:34 PM, Andres Freund <andres(at)2ndquadrant(dot)com> wrote:
>> I'm pretty sure that the current coding, which blows away the whole
>> relation, is used in other places, and I really don't see why it
>> should be fundamentally flawed, or why we should change it to clear
>> the cache entries out one by one instead of en masse.
>> RelidByRelfilenode definitely needs to use HASH_FIND rather than
>> HASH_ENTER, so that part I agree with.
>
> It surely is possible to go that route, but imagine what happens if the
> heap_open() blows away the entire hash. We'd either need to recheck if
> the hash exists before entering or recreate it after dropping. It seemed
> simpler to follow attoptcache's example.
I'm not sure if this is the best way forward, but I don't feel like
arguing about it, either, so committed.
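
For anyone following along, here's a rough sketch of the two points above:
doing the cache lookup with HASH_FIND, and an attoptcache-style invalidation
callback that removes entries one at a time rather than destroying the whole
hash. This is illustrative only -- the names (RelfilenodeMapHash,
RelfilenodeMapEntry, RelidByRelfilenodeSketch) and struct layout are
assumptions for discussion, not necessarily what got committed:

/*
 * Illustrative sketch only: names and layout are assumptions, not the
 * committed relfilenode map code.
 */
#include "postgres.h"
#include "storage/relfilenode.h"
#include "utils/hsearch.h"

typedef struct RelfilenodeMapEntry
{
    RelFileNode key;            /* hash key: the relation's relfilenode */
    Oid         relid;          /* pg_class OID it maps to */
} RelfilenodeMapEntry;

static HTAB *RelfilenodeMapHash = NULL;

/*
 * Look up a relation's OID by relfilenode.  The lookup uses HASH_FIND;
 * HASH_ENTER here would insert an uninitialized entry whenever the key
 * is not yet cached.
 */
Oid
RelidByRelfilenodeSketch(RelFileNode key)
{
    RelfilenodeMapEntry *entry;
    bool        found;

    if (RelfilenodeMapHash == NULL)
        return InvalidOid;      /* cache not built yet in this backend */

    entry = (RelfilenodeMapEntry *)
        hash_search(RelfilenodeMapHash, (void *) &key, HASH_FIND, &found);
    if (found)
        return entry->relid;

    /*
     * Cache miss: the real code would scan pg_class here (which can open
     * relations and process invalidations) and then insert the result
     * with HASH_ENTER.  Omitted in this sketch.
     */
    return InvalidOid;
}

/*
 * Invalidation callback following attoptcache's example: walk the hash
 * and remove matching entries one by one, rather than destroying and
 * rebuilding the whole table.
 */
static void
RelfilenodeMapInvalidateCallback(Datum arg, Oid relid)
{
    HASH_SEQ_STATUS status;
    RelfilenodeMapEntry *entry;

    if (RelfilenodeMapHash == NULL)
        return;

    hash_seq_init(&status, RelfilenodeMapHash);
    while ((entry = (RelfilenodeMapEntry *) hash_seq_search(&status)) != NULL)
    {
        /* InvalidOid is taken to mean "flush everything" */
        if (relid == InvalidOid || entry->relid == relid)
        {
            if (hash_search(RelfilenodeMapHash, (void *) &entry->key,
                            HASH_REMOVE, NULL) == NULL)
                elog(ERROR, "hash table corrupted");
        }
    }
}

The trade-off Andres describes is visible in the lookup path: if an
invalidation fired during the pg_class scan destroyed the whole hash instead,
the lookup would have to recheck or recreate the table before inserting.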
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company