Re: Reducing Catalog Locking

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Reducing Catalog Locking
Date: 2014-10-31 13:48:52
Message-ID: 29652.1414763332@sss.pgh.pa.us
Lists: pgsql-hackers

Robert Haas <robertmhaas(at)gmail(dot)com> writes:
> On a related note, I've previously had the thought that it would be
> nice to have a "big DDL lock" - that is, a lock that prevents
> concurrent DDL without preventing anything else - so that pg_dump
> could get just that one lock and then not worry about the state of the
> world changing under it.

Hm ... how would that work exactly? Every DDL operation has to take
the BigDDLLock in shared mode, and then pg_dump takes it in exclusive
mode? That would preclude two pg_dumps running concurrently, which
maybe isn't mainstream usage, but there's never been such a
restriction before. Parallel pg_dump in particular could be a problem,
since its multiple worker sessions would contend for the same lock.
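
For concreteness, here is a minimal sketch of the shared/exclusive
pattern being proposed. It uses a POSIX read-write lock rather than
PostgreSQL's actual lock manager, and the big_ddl_lock name and all
four functions are hypothetical illustrations, not anything in the
tree:

    #include <pthread.h>

    /* Hypothetical "big DDL lock" (sketch only; a real version
     * would go through the shared lock manager, not a pthread
     * rwlock, since sessions are separate processes). */
    static pthread_rwlock_t big_ddl_lock = PTHREAD_RWLOCK_INITIALIZER;

    /* Every DDL statement takes the lock in shared mode, so DDL
     * operations still run concurrently with one another. */
    void ddl_begin(void)  { pthread_rwlock_rdlock(&big_ddl_lock); }
    void ddl_end(void)    { pthread_rwlock_unlock(&big_ddl_lock); }

    /* pg_dump takes it in exclusive mode, blocking all DDL for the
     * duration -- and also blocking any other exclusive holder,
     * which is why a second pg_dump (or a parallel pg_dump worker)
     * would have to wait. */
    void dump_begin(void) { pthread_rwlock_wrlock(&big_ddl_lock); }
    void dump_end(void)   { pthread_rwlock_unlock(&big_ddl_lock); }

The exclusivity at dump_begin() is exactly what rules out concurrent
dumps in this scheme.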

But more to the point, this seems like optimizing pg_dump startup by
adding overhead to every DDL operation everywhere else, which doesn't
sound like a great tradeoff to me.

regards, tom lane
