From: Jan Wieck <JanWieck(at)Yahoo(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Gregory Stark <stark(at)enterprisedb(dot)com>, PostgreSQL Development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Proposal: Snapshot cloning
Date: 2007-01-26 17:39:28
Message-ID: 45BA3CD0.8020905@Yahoo.com
Lists: pgsql-hackers
On 1/26/2007 11:58 AM, Tom Lane wrote:
> Jan Wieck <JanWieck(at)Yahoo(dot)com> writes:
>> On 1/26/2007 8:06 AM, Gregory Stark wrote:
>>> It seems simpler to have a current_snapshot() function that returns a bytea
>>> or a new snapshot data type which set_current_snapshot(bytea) took to change
>>> your snapshot. Then you could use tables or out-of-band communication to pass
>>> around your snapshots however you please.
>>>
>>> set_current_snapshot() would have to sanity check that the xmin of the new
>>> snapshot isn't older than the current globaloldestxmin.
>
>> That would solve the backend to backend IPC problem nicely.
>
> But it fails on the count of making sure that globaloldestxmin doesn't
> advance past the snap you want to use. And exactly how will you pass
> a snap through a table? It won't become visible until you commit ...
> whereupon your own xmin isn't blocking the advance of globaloldestxmin.
The client receives the snapshot information as the result of calling
current_snapshot(). The call to set_current_snapshot(snap) errors out if
snap's xmin is older than globaloldestxmin. It is the client app's job
to make sure that the transaction that created snap is still in progress.
I didn't say anything about passing it through a table.
Take a modified pg_dump as an example. It could write multiple files: a
pre-load SQL file with the first part of the schema, and a post-load SQL
file that finalizes it (creating indexes, adding constraints). It then
builds a list of all relations to COPY and starts n threads, each
writing to a different file. Each thread connects to the DB and sets its
snapshot to that of the main transaction (which is still open). Then
each thread grabs the next table to dump from the list and writes the
COPY data to its output file. The threads exit when the list of tables
is empty. The main thread waits until the last thread has joined, then
commits the main transaction.
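In pseudo-Python (again psycopg2-style, with the two snapshot functions
and all names hypothetical), the whole dance would look about like this:

    import threading
    import psycopg2

    DSN = "dbname=test"        # made-up connection string
    N_THREADS = 4

    # Main thread: open the dump transaction and export its snapshot.
    main = psycopg2.connect(DSN)
    mcur = main.cursor()
    mcur.execute("SET TRANSACTION ISOLATION LEVEL SERIALIZABLE")
    mcur.execute("SELECT current_snapshot()")    # proposed function
    snap = mcur.fetchone()[0]

    # Build the list of tables to COPY.
    mcur.execute("""
        SELECT quote_ident(schemaname) || '.' || quote_ident(tablename)
          FROM pg_tables
         WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
    """)
    todo = [row[0] for row in mcur.fetchall()]
    lock = threading.Lock()

    def dump_worker(n):
        conn = psycopg2.connect(DSN)
        cur = conn.cursor()
        # Adopt the main transaction's snapshot; errors out if its xmin
        # is already older than globaloldestxmin.
        cur.execute("SELECT set_current_snapshot(%s)", (snap,))  # proposed
        out = open("data.%d.sql" % n, "w")
        while True:
            lock.acquire()
            try:
                if not todo:
                    break
                tbl = todo.pop()
            finally:
                lock.release()
            # Write a psql-restorable COPY block for this table.
            out.write("COPY %s FROM stdin;\n" % tbl)
            cur.copy_expert("COPY %s TO STDOUT" % tbl, out)
            out.write("\\.\n")
        out.close()
        conn.close()

    threads = [threading.Thread(target=dump_worker, args=(i,))
               for i in range(N_THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    main.commit()    # only now may globaloldestxmin advance past snap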
It wouldn't be too hard to write a script that restores such a split
dump in parallel as well.
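Something like this, say (a pure sketch; the file names match the
invented ones above):

    import subprocess

    # Load the schema, then the data files in parallel, then finalize.
    subprocess.check_call(["psql", "-v", "ON_ERROR_STOP=1",
                           "-f", "pre-load.sql", "test"])
    workers = [subprocess.Popen(["psql", "-v", "ON_ERROR_STOP=1",
                                 "-f", "data.%d.sql" % i, "test"])
               for i in range(4)]
    for p in workers:
        if p.wait() != 0:
            raise SystemExit("parallel restore worker failed")
    subprocess.check_call(["psql", "-v", "ON_ERROR_STOP=1",
                           "-f", "post-load.sql", "test"])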
Jan
--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#================================================== JanWieck(at)Yahoo(dot)com #