From: Erik Jones <erik(at)myemma(dot)com>
To: Chris Browne <cbbrowne(at)acm(dot)org>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: why postgresql over other RDBMS
Date: 2007-05-25 14:47:23
Message-ID: D7C2606C-357E-45DA-A6A0-4F5DD3768BBD@myemma.com
Lists: pgsql-general
On May 24, 2007, at 5:21 PM, Chris Browne wrote:
> agentm(at)themactionfaction(dot)com ("A.M.") writes:
>> On May 24, 2007, at 14:29 , Wiebe Cazemier wrote:
>>
>>> On Thursday 24 May 2007 17:30, Alexander Staubo wrote:
>>>
>>>> [2] Nobody else has this, I believe, except possibly Ingres and
>>>> NonStop SQL. This means you can do a "begin transaction", then
>>>> issue "create table", "alter table", etc. ad nauseam, and in the
>>>> meantime concurrent transactions will just work. Beautiful for
>>>> atomically upgrading a production server. Oracle, of course,
>>>> commits after each DDL statement.
>>>
>>> If this is such a rare feature, I'm very glad we chose postgresql.
>>> I use it all the time, and wouldn't know what to do without it. We
>>> bypassed Ruby on Rails' migrations and just implemented them
>>> directly in SQL. Writing migrations is a breeze this way, and you
>>> don't have to hassle with atomicity, or with the pain of
>>> discovering that a migration doesn't work on the production server.
>>
>> Indeed. Wouldn't it be a cool feature to persist transaction state
>> across connections, so that a new connection could get access to a
>> subtransaction's state? That way, you could make your schema changes
>> and test them with any number of test clients (each designating the
>> state to connect with), and then commit when everything works.
>>
>> Unfortunately, the postgresql architecture wouldn't lend itself well
>> to this. Still, it seems like a natural extension of the notion of
>> subtransactions.
>
> Jan Wieck had a proposal to similar effect, namely a way for one
> connection to duplicate the state of another.
>
> This would permit a neat parallel decomposition of pg_dump: a 4-way
> parallelization of it could function something like the following:
>
> - connection 1 opens, establishes the usual serialized mode
> transaction
>
> - connection 1 dumps the table metadata into one or more files in a
> specified directory
>
> - then it forks 3 more connections, and seeds them with the same
> serialized mode state
>
> - it then goes through and dumps 4 tables concurrently, one apiece
>   to a file in the directory.
>
> This could considerably improve the speed of dumps, and possibly of
> restores, too.
>
> Note that this isn't related to subtransactions...
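As a rough sketch of what that seeding step might look like in SQL (the snapshot-export function and identifier here are hypothetical, not an existing API at the time of this thread):

```sql
-- Connection 1: open the controlling serialized-mode transaction and
-- publish its snapshot under some identifier.
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT pg_export_snapshot();  -- hypothetical; say it returns 'snap-0001'

-- Connections 2-4: adopt that same snapshot before dumping.
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SET TRANSACTION SNAPSHOT 'snap-0001';  -- hypothetical import step
-- Each worker now sees exactly the data connection 1 sees, and can
-- COPY its assigned tables out to separate files concurrently.
```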
Interesting. That's actually pretty close to the reindexing
strategy/script that I use, and I've been planning on extending it to
a vacuuming strategy. So, I'll add my support for someone building
this kind of parallelism into pg_dump/restore.
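For anyone following along, the transactional DDL pattern mentioned up-thread is as simple as the following (table and column names are just placeholders):

```sql
BEGIN;
CREATE TABLE accounts (id serial PRIMARY KEY, name text);
ALTER TABLE accounts ADD COLUMN created_at timestamptz DEFAULT now();
-- If anything above fails, or you change your mind, issue
--   ROLLBACK;
-- and the schema is exactly as it was. Otherwise:
COMMIT;  -- all of the DDL becomes visible atomically
```

Concurrent transactions never see a half-applied migration either way.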
Erik Jones
Software Developer | Emma®
erik(at)myemma(dot)com
800.595.4401 or 615.292.5888
615.292.0777 (fax)
Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com