From: Greg Stark <gsstark(at)mit(dot)edu>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Hard problem with concurrency
Date: 2003-02-17 04:51:49
Message-ID: 87smunfr16.fsf@stark.dyndns.tv
Lists: pgsql-hackers
Hm, odd, nobody mentioned this solution:
If you don't have a primary key already, create a unique index on the
combination of columns you want to be unique. Then:
. Try to insert the record.
. If you get a duplicate key error, do an update instead.
No possibility of duplicate records due to race conditions. If two people
try to insert/update at the same time you'll get only one of the two results,
but that's the inherent downside of the approach you've taken. It's a tad
inefficient if the usual case is updates, but certainly no less efficient
than taking table locks.
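A minimal sketch of the insert-then-update pattern, using Python's built-in
sqlite3 module so it runs self-contained (the table and column names are made
up for illustration; the same logic applies to any engine that enforces unique
indexes). One caveat for PostgreSQL specifically: a duplicate-key error aborts
the current transaction, so there you would catch the error and retry the
update in a new transaction, or roll back to a savepoint first.

```python
import sqlite3

# In-memory database stands in for the real server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counters (name TEXT, hits INTEGER)")
# The unique index is what makes the race-free upsert possible.
conn.execute("CREATE UNIQUE INDEX counters_name ON counters (name)")

def upsert(name):
    try:
        # Step 1: try to insert the record.
        conn.execute(
            "INSERT INTO counters (name, hits) VALUES (?, 1)", (name,))
    except sqlite3.IntegrityError:
        # Step 2: the unique index rejected it -- update instead.
        conn.execute(
            "UPDATE counters SET hits = hits + 1 WHERE name = ?", (name,))
    conn.commit()

upsert("home")
upsert("home")
print(conn.execute(
    "SELECT hits FROM counters WHERE name = 'home'").fetchone()[0])  # prints 2
```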
I'm not sure what you're implementing here. Depending on what it is, you might
consider having a table of raw data that you _only_ insert into, then
process those rows into a table holding the consolidated data you're trying to
gather. I've usually found that's more flexible later, because you have
all the raw data in the database even if you present only a limited view.
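The append-only design above can be sketched the same way: inserts never
conflict, and a separate consolidation pass derives the summary. This is a
hypothetical example (table and column names invented), again using sqlite3
only so the snippet runs on its own.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Raw table: append-only, one row per observed event. Inserts here
# never collide, so no locking or upsert logic is needed.
conn.execute("CREATE TABLE raw_hits (name TEXT, seen_at TEXT)")
# Consolidated table: derived entirely from the raw data.
conn.execute("CREATE TABLE counters (name TEXT PRIMARY KEY, hits INTEGER)")

for name in ("home", "home", "about"):
    conn.execute(
        "INSERT INTO raw_hits (name, seen_at) VALUES (?, datetime('now'))",
        (name,))

# Consolidation pass: rebuild the summary from the raw rows.
conn.execute("DELETE FROM counters")
conn.execute(
    "INSERT INTO counters SELECT name, COUNT(*) FROM raw_hits GROUP BY name")
conn.commit()

print(dict(conn.execute("SELECT name, hits FROM counters")))
```

Because the raw rows are kept, the summary can be rebuilt at any time, or
re-aggregated differently if requirements change later.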
--
greg