From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: a modest improvement to get_object_address()
Date: 2011-11-09 13:15:23
Message-ID: CA+TgmoZ-_K7c5gGW_arz_pWhiWU6sW+vf8NQcaCrTDkqb=sndQ@mail.gmail.com
Lists: pgsql-hackers
I'd like to propose the attached patch, which changes
get_object_address() in a manner similar to what we did in
RangeVarGetRelid() in commit 4240e429d0c2d889d0cda23c618f94e12c13ade7.
The basic idea is that, if we look up an object name, acquire the
corresponding lock, and then find that the object was dropped during
the lock wait, we retry the whole operation instead of emitting a
baffling error message. Example:
rhaas=# create schema x;
CREATE SCHEMA
rhaas=# begin;
BEGIN
rhaas=# drop schema x;
DROP SCHEMA
Then, in another session:
rhaas=# comment on schema x is 'doodle';
Then, in the first session:
rhaas=# commit;
COMMIT
At this point, the second session must error out. The current code
produces this:
ERROR: cache lookup failed for class 2615 object 16386 subobj 0
With the attached patch, you instead get:
ERROR: schema "x" does not exist
...which is obviously quite a bit nicer.
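For the curious, the shape of the new loop is roughly as follows. This
is a simplified sketch rather than the patch itself:
resolve_object_address() and lock_object_address() /
unlock_object_address() are hypothetical stand-ins for the existing
name-lookup and object-locking code, and some details (e.g. relation
handling) are omitted.

ObjectAddress
get_object_address(ObjectType objtype, List *objname, List *objargs,
                   Relation *relp, LOCKMODE lockmode)
{
    ObjectAddress address;
    ObjectAddress old_address;

    /* No object locked yet. */
    old_address.classId = InvalidOid;
    old_address.objectId = InvalidOid;
    old_address.objectSubId = 0;

    for (;;)
    {
        /*
         * Resolve the name to an address.  If the object was dropped
         * while we slept on the lock, this raises the ordinary
         * "... does not exist" error instead of a cache lookup failure.
         */
        address = resolve_object_address(objtype, objname, objargs, relp);

        /*
         * If the name still maps to the object we locked on the
         * previous iteration, the lookup is stable; we're done.
         */
        if (OidIsValid(old_address.classId) &&
            address.classId == old_address.classId &&
            address.objectId == old_address.objectId &&
            address.objectSubId == old_address.objectSubId)
            break;

        /* Release the lock from the previous iteration, if any ... */
        if (OidIsValid(old_address.classId))
            unlock_object_address(old_address, lockmode);

        /*
         * ... and lock the object we just found.  We may block here;
         * once we have the lock, loop around and look the name up
         * again to see whether it still refers to the same object.
         */
        lock_object_address(address, lockmode);
        old_address = address;
    }

    return address;
}

Note that the loop exits holding a lock on the object that the name
lookup last resolved to, which is what makes the caller's subsequent
catalog access safe.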
Also, if the concurrent transaction drops and re-creates the schema
instead of just dropping it, the new code will allow the operation to
succeed (with the expected results) rather than failing.
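For instance, suppose the first session instead runs:
rhaas=# begin;
BEGIN
rhaas=# drop schema x;
DROP SCHEMA
rhaas=# create schema x;
CREATE SCHEMA
and the second session again issues the COMMENT, which blocks. When
the first session commits, the COMMENT retries its lookup, finds the
newly created schema under the same name, and completes normally,
leaving the comment on the new schema.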
Objections?
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Attachment: objectaddress-retry.patch (application/octet-stream, 1.2 KB)