From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Dimitri Fontaine <dimitri(at)2ndquadrant(dot)fr>
Cc: Bruce Momjian <bruce(at)momjian(dot)us>, Magnus Hagander <magnus(at)hagander(dot)net>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Peter Eisentraut <peter_e(at)gmx(dot)net>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pg_basebackup for streaming base backups
Date: 2011-01-20 17:56:39
Message-ID: AANLkTikuZxy95qFCCUc9fvu13qSd6-C0=p=+vnvguYDt@mail.gmail.com
Lists: pgsql-hackers

On Thu, Jan 20, 2011 at 11:59 AM, Dimitri Fontaine
<dimitri(at)2ndquadrant(dot)fr> wrote:
> Robert Haas <robertmhaas(at)gmail(dot)com> writes:
>> Also, it won't actually work unless the server has replication
>> configured (wal_level!=minimal, max_wal_senders>0, and possibly some
>> setting for wal_keep_segments), which has been the main point of the
>> naming discussion thus far. Now, you know what would be REALLY cool?
>> Making this work without any special advance configuration. Like if
>> we somehow figured out a way to make max_wal_senders unnecessary, and
>> a way to change wal_level without bouncing the server, so that we
>> could temporarily boost the WAL level from minimal to archive if
>> someone's running a backup.
>
> On not using max_wal_senders, we're already on our way: you "just" have
> to use the external walreceiver that Magnus has already written the code
> for. As for the WAL level, I don't know that we have a way to change
> that already, but a big part of what this base backup tool is useful for
> is preparing a standby… so you certainly want to change that setting
> there anyway.

Well, yeah, but it would be nice to also use it just to take a regular
old backup on a system that doesn't otherwise need replication.
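
For context, the advance configuration we're trying to make unnecessary
amounts to roughly this in postgresql.conf (the values are illustrative):

    wal_level = archive          # anything above 'minimal'
    max_wal_senders = 1          # must be > 0 to allow a walsender connection
    wal_keep_segments = 32       # optional; retain extra WAL for slow receivers
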
I think that the basic problem with wal_level is that to increase it
you need to somehow ensure that all the backends have the new setting,
and then checkpoint. Right now, the backends get the value through
the GUC machinery, and so there's no particular bound on how long it
could take for them to pick up the new value. I think if we could
find some way of making sure that the backends got the new value in a
reasonably timely fashion, we'd be pretty close to being able to do
this. But it's hard to see how to do that.

I had some vague idea of creating a mechanism for broadcasting
critical parameter changes. You'd make a structure in shared memory
containing the "canonical" values of wal_level and all other critical
variables, and the structure would also contain a 64-bit counter.
Whenever you want to make a parameter change, you lock the structure,
make your change, bump the counter, and release the lock. Then,
there's a second structure, also in shared memory, where backends
report the value that the counter had the last time they updated their
local copies of the structure from the shared structure. You can
watch that to find out when everyone's guaranteed to have the new
value. If someone doesn't respond quickly enough, you could send them
a signal to get them moving. What would really be ideal is if you
could make this safe enough that the signal handler itself could do
all the work, rather than just setting a flag; or maybe do it in
CHECK_FOR_INTERRUPTS(). If you can't make it safe enough to put it
somewhere pretty low-level like that, the whole idea might fall
apart, because it wouldn't be useful to have a way of doing this that
mostly works, except that sometimes it just sits there and hangs for
a really long time.
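
To make that concrete, here's a toy standalone sketch of the scheme.
The names are all made up, and C11 atomics plus a pthread mutex stand
in for PostgreSQL's shared memory, LWLocks, and signalling:

    /*
     * Toy sketch of the broadcast scheme; not PostgreSQL code.
     */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    #define MAX_BACKENDS 8

    /* "Canonical" values of the critical variables, plus the counter. */
    typedef struct
    {
        pthread_mutex_t lock;
        int wal_level;                      /* ...and other critical settings */
        atomic_uint_fast64_t generation;    /* bumped on every change */
    } CriticalSettings;

    static CriticalSettings shared;

    /* Second structure: the generation each backend last copied. */
    static atomic_uint_fast64_t acked[MAX_BACKENDS];

    /* Backend-local copy of the settings. */
    static int local_wal_level;

    /* Changing a parameter: lock, change, bump the counter, unlock. */
    static uint64_t
    set_wal_level(int new_level)
    {
        uint64_t gen;

        pthread_mutex_lock(&shared.lock);
        shared.wal_level = new_level;
        gen = atomic_fetch_add(&shared.generation, 1) + 1;
        pthread_mutex_unlock(&shared.lock);
        return gen;         /* caller waits for everyone to reach this */
    }

    /* Backend side, e.g. run at CHECK_FOR_INTERRUPTS() time. */
    static void
    refresh_settings(int my_backend_id)
    {
        uint64_t gen;

        pthread_mutex_lock(&shared.lock);
        local_wal_level = shared.wal_level;
        gen = atomic_load(&shared.generation);
        pthread_mutex_unlock(&shared.lock);

        atomic_store(&acked[my_backend_id], gen);   /* report progress */
    }

    /* Everyone has the new value once every acked[] entry >= target. */
    static int
    all_backends_caught_up(uint64_t target)
    {
        for (int i = 0; i < MAX_BACKENDS; i++)
            if (atomic_load(&acked[i]) < target)
                return 0;   /* this is where you'd signal the laggard */
        return 1;
    }

    int
    main(void)
    {
        pthread_mutex_init(&shared.lock, NULL);

        uint64_t target = set_wal_level(2);     /* minimal -> archive, say */

        for (int i = 0; i < MAX_BACKENDS; i++)
            refresh_settings(i);                /* each backend does this */
        printf("caught up: %d\n", all_backends_caught_up(target));
        return 0;
    }

The open question is exactly whether something like refresh_settings()
could be made safe to call from somewhere as low-level as a signal
handler or CHECK_FOR_INTERRUPTS().
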
All pie in the sky at this point...

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company