From: Mike Mascari <mascarm(at)mascari(dot)com>
To: Michael Chaney <mdchaney(at)michaelchaney(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Moving from MySQL to PGSQL....some questions (multilevel
Date: 2004-03-05 09:04:37
Message-ID: 404842A5.20005@mascari.com
Lists: pgsql-general
Michael Chaney wrote:
> On Thu, Mar 04, 2004 at 10:50:50AM -0500, Tom Lane wrote:
>
>>If I understood the requirements correctly, it might be sufficient to
>>put a unique index on (id1,id2). If two transactions simultaneously try
>>to insert for the same id1, one would get a duplicate-index-entry
>>failure, and it would have to retry. The advantage is you take no
>>table-wide lock. So if the normal usage pattern involves lots of
>>concurrent inserts for different id1 values, you'd come out ahead.
>>Whether that applies, or is worth the hassle of a retry loop in the
>>application, I can't tell from the info we've been given.
>
>
> Not a bad idea, but probably best to move it into a stored procedure in
> that case.
But there isn't any exception handling in stored procedures - a
duplicate-index-entry failure will abort the function and return the
error to the client. The only place to put the retry loop would be in
the client, AFAICS.
Mike Mascari
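The client-side retry loop discussed above can be sketched as follows. This is a minimal illustration, not code from the thread: the table name `items`, the column names, and the use of Python's stdlib SQLite driver are all assumptions made for the sake of a runnable example. With PostgreSQL the same pattern applies, except the client would catch the unique-violation error (SQLSTATE 23505) from its driver instead of `sqlite3.IntegrityError`.

```python
import sqlite3

def insert_next(conn, id1, retries=5):
    """Insert the next id2 for the given id1, retrying on duplicates."""
    for _ in range(retries):
        # Compute a candidate id2; two concurrent sessions may pick the
        # same value, which is exactly the race the retry loop handles.
        next_id2 = conn.execute(
            "SELECT COALESCE(MAX(id2), 0) + 1 FROM items WHERE id1 = ?",
            (id1,),
        ).fetchone()[0]
        try:
            conn.execute(
                "INSERT INTO items (id1, id2) VALUES (?, ?)",
                (id1, next_id2),
            )
            conn.commit()
            return next_id2
        except sqlite3.IntegrityError:
            # Another session inserted the same (id1, id2) first:
            # roll back and recompute. No table-wide lock is taken.
            conn.rollback()
    raise RuntimeError("could not insert after %d attempts" % retries)

# Demo setup: the unique constraint on (id1, id2) is what turns the
# race into a catchable error rather than a silent duplicate.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE items (id1 INTEGER, id2 INTEGER, UNIQUE (id1, id2))"
)
print(insert_next(conn, 1))  # first id2 for id1=1
```

The retry count bounds how long the client spins under heavy contention; as noted above, whether this beats a table-wide lock depends on how often concurrent inserts target the same id1.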