From: Michael Fuhr <mike(at)fuhr(dot)org>
To: Bohdan Linda <bohdan(dot)linda(at)seznam(dot)cz>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Partial commit within the trasaction
Date: 2005-09-09 14:36:15
Message-ID: 20050909143615.GA10102@winnie.fuhr.org
Lists: pgsql-general
On Thu, Sep 08, 2005 at 05:35:40PM +0200, Bohdan Linda wrote:
> On Thu, Sep 08, 2005 at 04:35:51PM +0200, Michael Fuhr wrote:
> > On Thu, Sep 08, 2005 at 03:39:50PM +0200, Bohdan Linda wrote:
> > commit it now." You have to do some extra bookkeeping and you can't
> > commit several prepared transactions atomically (as far as I know),
> > but that's one way you could make changes durable without actually
> > committing them until later.
>
> In case of durable transactions, would they be released from memory? Thus
> could the transaction be more respectful to the HW when processing too
> much data?
I'll defer comments on the memory usage of transactions to the
developers, but in general transactions shouldn't have memory
problems. In an earlier message you said that the "db aborted
processing such huge updates with out of memory message" -- can you
elaborate on that? What were you doing that you think caused the
out of memory error? That sounds like the underlying problem that
needs to be solved.
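
As an aside on the prepared-transaction approach quoted above, the
idea looks roughly like this (untested sketch; it assumes the
two-phase commit commands coming in 8.1, and the table and
transaction names are made up):

    BEGIN;
    UPDATE accounts SET balance = balance - 100 WHERE id = 1;
    -- make the work durable on disk without committing it yet
    PREPARE TRANSACTION 'batch_1';

    -- later, possibly from a different session:
    COMMIT PREPARED 'batch_1';
    -- or, to throw the work away:
    -- ROLLBACK PREPARED 'batch_1';

Keep in mind that a prepared transaction keeps holding its locks
until you commit or roll it back, so don't let them pile up.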
> And what about nested transactions? Are they planned? The point is
> connected to my previous question of the secured access to stored
> procedures. If I move part of database logic to the client, I will have to
> introduce parameters to the procedures. This may be potentially abusable.
What exactly do you mean by "nested transactions"? PostgreSQL 8.x
has savepoints but I'm guessing you mean something else, like perhaps
the ability to begin and end transactions within a function. The
developers' TODO list has an item that says "Add capability to
create and call PROCEDURES," which might be what you're really
after.
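
Savepoints do let you roll back part of a transaction without losing
the rest, e.g. (untested; table name is made up):

    BEGIN;
    INSERT INTO t VALUES (1);
    SAVEPOINT sp1;
    INSERT INTO t VALUES (2);    -- suppose this turns out to be wrong
    ROLLBACK TO SAVEPOINT sp1;   -- undoes only the work done since sp1
    COMMIT;                      -- the first INSERT still commits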
> If I try to use dblink from server to server (both are the same), is there
> some performance penalty? How big?
That depends on how much you do over the dblink connection. If you
execute many statements that each do only a little work then you'll
have a lot of overhead. On the other hand, if you execute statements
that each do a lot of work then the overhead will be minimal.
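
For example (untested; connection string, table, and columns are
made up), one call that does a lot of work is much cheaper than the
same work spread over many small calls:

    -- one round trip that does real work, so the relative overhead is low:
    SELECT *
    FROM dblink('host=otherhost dbname=mydb',
                'SELECT id, val FROM big_table WHERE flag')
         AS t(id integer, val text);

    -- a statement like this issued thousands of times in a loop pays
    -- the connection and round-trip overhead on every call:
    SELECT dblink_exec('host=otherhost dbname=mydb',
                       'UPDATE big_table SET val = 0 WHERE id = 42');

If you do need many small calls, opening the connection once with
dblink_connect() and reusing it by name at least avoids reconnecting
every time.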
--
Michael Fuhr