From: Csaba Nagy <nagy(at)ecircle-ag(dot)com>
To: Ron Johnson <ron(dot)l(dot)johnson(at)cox(dot)net>
Cc: PgSQL General ML <pgsql-general(at)postgresql(dot)org>
Subject: Re: Replaceing records
Date: 2003-09-04 10:17:35
Message-ID: 1062670655.6718.125.camel@coppola.ecircle.de
Lists: pgsql-general
[philosophical post regarding a missing feature of Postgres]
Hi all,
This is exactly the problem I'm facing right now, and I've found there's
no good solution to it in Postgres.
Basically I have a complex database operation which spans a transaction
across multiple simple DB operations that can also be executed
atomically. The separate pieces must also work separately.
Now one of the pieces updates a row in a table if a row with the given
key exists, and inserts it if not. There is a unique constraint on the
key. I found no way to avoid failed inserts due to unique-constraint
violations, which cause an automatic rollback of the running
transaction.
Contention on insert has quite a high probability for this operation in
our application.
It's unacceptable to roll back and retry the whole transaction just
because this one insert failed: partly for performance reasons (a lot of
work is done beforehand, and with lots of threads/clustered machines
inserting at the same time, constant retries would slow things down
painfully), and partly because it would make our code a lot more complex
than it already is.
Locking is also a bad option: this is about inserts, so there is nothing
useful to lock, short of locking the whole table. In the end that is the
solution I'm using, because performance-wise it is about the same as
retrying the transaction (in this particular case at least), but I'm
completely unhappy with it.
This problem would be easily solved if the current transaction were not
automatically rolled back on a failed insert. Given that, it would be as
easy as attempting the insert and, if it fails, doing the update.
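For illustration only, here is a minimal sketch of that try-insert-then-update pattern, written in Python against SQLite (via the stdlib `sqlite3` module), precisely because SQLite already behaves the way the post asks Postgres to: a statement that fails on a unique-constraint violation raises an error without aborting the enclosing transaction. The table and column names here are made up for the example, not taken from the original post:

```python
import sqlite3

def upsert(conn, key, value):
    # Attempt the insert first; on a unique-constraint violation,
    # fall back to an update of the existing row. In SQLite the
    # failed INSERT does not poison the open transaction, so the
    # UPDATE can proceed in the same transaction.
    try:
        conn.execute("INSERT INTO t (k, v) VALUES (?, ?)", (key, value))
    except sqlite3.IntegrityError:
        conn.execute("UPDATE t SET v = ? WHERE k = ?", (value, key))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (k TEXT PRIMARY KEY, v INTEGER)")
upsert(conn, "a", 1)   # no such key yet -> inserts
upsert(conn, "a", 2)   # key exists -> the insert fails, so it updates
print(conn.execute("SELECT v FROM t WHERE k = 'a'").fetchone()[0])  # 2
```

Note the caveat that makes this sketch inapplicable to the Postgres of this thread: there, the `IntegrityError` equivalent would leave the whole transaction in an aborted state, which is exactly the complaint above.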
I know this feature is not an easy one, but I would like to point out
that it is really useful, and that its absence is one of the barriers to
porting complex applications to Postgres, given that other databases
have it readily available.
Cheers,
Csaba.
On Thu, 2003-09-04 at 11:24, Ron Johnson wrote:
> On Thu, 2003-09-04 at 03:00, Alex wrote:
> > Hi,
> > MySQL has a nice feature that lets you call REPLACE rather than
> > INSERT, which will insert if the record is not present and replace it if it is.
> >
> > Is there any similar feature ?
> >
> > Currently I run a select prior to any insert and then update or insert
> > depending on the result of the select. The problem here is that I just
> > doubled the queries. Doing so on a table with 5Mio rows and on 100k
> > inserts will take time and I would like to have an efficient way of
> > doing it.
>
> What if you try to do the INSERT, and if it returns with a "key exists"
> error, do the UPDATE?
>
> Will the SELECT really slow things down that much, since the record
> will be in buffers after you touch it the 1st time?
>
> --
> -----------------------------------------------------------------
> Ron Johnson, Jr. ron(dot)l(dot)johnson(at)cox(dot)net
> Jefferson, LA USA
>
> "All machines, no matter how complex, are considered to be based
> on 6 simple elements: the lever, the pulley, the wheel and axle,
> the screw, the wedge and the inclined plane."
> Marilyn Vos Savant
>
>
> ---------------------------(end of broadcast)---------------------------
> TIP 2: you can get off all lists at once with the unregister command
> (send "unregister YourEmailAddressHere" to majordomo(at)postgresql(dot)org)