From: Greg Stark <gsstark(at)mit(dot)edu>
To: Merrick <merrick(at)gmail(dot)com>
Cc: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: 1 Sequence per Row i.e. each customer's first order starts at 1
Date: 2009-07-02 09:28:47
Message-ID: 407d949e0907020228y16ff33ebj5734624dd06b272a@mail.gmail.com
Lists: pgsql-general
On Thu, Jul 2, 2009 at 2:46 AM, Merrick<merrick(at)gmail(dot)com> wrote:
> I was hoping there would be a way to add a field the sequence table
> postgresql automatically generates so I could rely on whatever
> mechanism postgresql uses to avoid the problems described thus far.
Hm, well you could create a sequence for every customer. I don't think
that's the way I would go but in theory it should work. Having
thousands or millions of sequences could make dealing with things like
pg_dump kind of a challenge though.
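To illustrate, something like this -- untested sketch, and the order_seq_<customer_id> naming convention is made up for the example:
-- created when customer 42 is created
create sequence order_seq_42;
-- sequence name is built by the application from the customer id
insert into orders (customer_id, order_num)
values (42, nextval('order_seq_42'));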
> I should have included more info, it's highly likely that multiple
> users will be accessing using same customer_id when creating orders
> thus deadlocks would be an issue I would like to avoid.
Well, deadlocks are only a problem if you're not careful how you
acquire the locks. If you keep the transaction short and this is the
only reason you lock the customer record then you won't get a
deadlock. Just make sure you acquire the lock before you acquire any
weaker lock, such as the one taken by referential integrity checks
when you insert a record that refers to that customer.
If it's part of a long transaction which acquires locks on multiple
customers then you have to be concerned with what order the locks are
acquired.
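Concretely, an untested sketch -- primary-key lookups lock exactly one row each, so the locking order stays fully under your control:
begin;
-- always lock customer rows in ascending customer_id order
select * from customer where customer_id = 7 for update;
select * from customer where customer_id = 42 for update;
select * from customer where customer_id = 99 for update;
-- inserts referencing these customers go here, after the locks are held
commit;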
> On Wed, Jul 1, 2009 at 6:01 PM, Scott Marlowe<scott(dot)marlowe(at)gmail(dot)com> wrote:
>> The simplest method is to do something like:
>>
>> begin;
>> select * from sometable where cust_id=99 order by order_id desc for update;
>>
>> to lock all the customer records for cust_id 99, then take the first
>> record, which should have the highest order_id, grab that increment it
>> and then insert the new record and commit; the transaction. Assuming
>> your customers aren't ordering dozens of things a second, this should
>> work with minimal locking contention.
I don't like the idea of locking all the order records. That sounds
like it would lead to even more contention than locking just the
customer record. There could be thousands of order records to lock,
scattered all over the orders table.
Also, this seems like it would have deadlock risks no matter how you
wrote the code, since you can't control the order in which the locks
are taken. If you happen to get a customer with a significant number
of orders you'll get a bitmap heap scan or even a sequential scan, and
that could lock the records in a different order than your ORDER BY.
And if someone else locked them earlier, when the stats reflected
fewer records, they would have taken the locks in a different order.
It seems to me that locking all the order records here is really just
a proxy for locking their parent customer record, and that would work
better anyway.
You could avoid the update to the customer record by combining these
two strategies though. Instead of updating a last_order_num field in
customer do something like this:
-- :cust and :next_order_num are bind parameters supplied by the application
begin;
select * from customer where customer_id = :cust for update;
select coalesce(max(order_num), 0) + 1 as next_order_num
  from orders where customer_id = :cust;
-- read next_order_num back in the application, then insert with it:
insert into orders (customer_id, order_num, ...) values (:cust, :next_order_num, ...);
commit;
This uses the lock on customer to protect your insert against someone
else inserting the same order_num but doesn't actually update the
customer table.
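Incidentally, you could fold the max() lookup into the insert itself -- untested sketch, column list abbreviated; coalesce handles the customer's first order so numbering starts at 1:
begin;
select * from customer where customer_id = :cust for update;
-- the customer-row lock serializes concurrent inserts for this customer
insert into orders (customer_id, order_num)
select :cust, coalesce(max(order_num), 0) + 1
  from orders
 where customer_id = :cust;
commit;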