Re: New to concurrency

From: "Joris Dobbelsteen" <Joris(at)familiedobbelsteen(dot)nl>
To: "John D(dot) Burger" <john(at)mitre(dot)org>, "Postgres General" <pgsql-general(at)postgresql(dot)org>
Subject: Re: New to concurrency
Date: 2007-04-07 13:59:17
Message-ID: 73427AD314CC364C8DF0FFF9C4D693FF037A7C@nehemiah.joris2k.local
Lists: pgsql-general

>-----Original Message-----
>From: pgsql-general-owner(at)postgresql(dot)org
>[mailto:pgsql-general-owner(at)postgresql(dot)org] On Behalf Of John D. Burger
>Sent: Saturday, 7 April 2007 2:04
>To: Postgres General
>Subject: [GENERAL] New to concurrency
>
>For the first time, I find myself wanting to use some of PG's
>concurrency control stuff, and I could use some advice.
>
>I have requests showing up in a database, and I have one or
>more servers picking these up with LISTEN/NOTIFY. The
>requests go into a table with a status column, which is
>initially NULL, so a server finds requests to process like so:
>
> begin;
> select id from requests where status is null order by ts limit 1;
>
>(ts is a timestamp column - process the earliest request first.)
>
>A server takes responsibility for a request by setting the
>status for the id it got with the previous query:
>
> update requests set status = 'start' where id = 42 and status is null;
> commit;
>
>My client library (Python pgdb) lets me know how many rows got
>updated, in this case 1, if the server set the status, or 0 if
>that request's status is no longer NULL, like if some other
>server picked it up in the interim.
>
>Now I know that this last bit is not really any protection
>against two servers both thinking they should process the same
>request. I suspect I want to use SELECT FOR UPDATE in the
>first query - will that be sufficient in this situation?

That should indeed work. SELECT FOR UPDATE takes an exclusive row-level
lock, which conflicts with any other lock on that row. There could be
some resource contention, though...

There are a few more tricks, and this must be handled with extreme care.
Obviously the simplest solution is to have a single agent handle
dispatching of tasks, which at least makes the scheme less prone to
mistakes or oversights.

BEGIN;
CREATE TABLE request (
    id serial NOT NULL PRIMARY KEY,
    status character varying(10) DEFAULT NULL
);
-- insert 12 pending requests (status NULL)
INSERT INTO request(status) SELECT NULL FROM generate_series(1, 12);
COMMIT;

Now we fetch the request with status NULL and the earliest id (standing
in for your timestamp). We do this from two concurrent transactions, T1
and T2...

T1: BEGIN;
T2: BEGIN;
T1: SELECT id FROM request WHERE status IS NULL ORDER BY id LIMIT 1 FOR UPDATE;
T2: SELECT id FROM request WHERE status IS NULL ORDER BY id LIMIT 1 FOR UPDATE;
-- T2 will block now, waiting for T1's row lock...
T1: UPDATE request SET status = 'start' WHERE id = 1;
T1: COMMIT;
T2: returns no row at this point (observed on 8.2.1): once T1 commits,
the row T2 was waiting on no longer satisfies status IS NULL, so T2
will have to retry.
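In Python (you mention pgdb; any DB-API 2.0 driver looks the same), that claim-and-retry loop could be sketched as below. The `claim_next_request` helper and the in-memory `FakeConn` stand-in are my inventions for illustration; against a real database you would pass a pgdb connection instead:

```python
def claim_next_request(conn, max_retries=3):
    """Claim the oldest unclaimed request, retrying when a concurrent
    worker wins the race and our SELECT ... FOR UPDATE returns no row."""
    for _ in range(max_retries):
        cur = conn.cursor()
        # DB-API drivers open a transaction implicitly; COMMIT ends it.
        cur.execute("SELECT id FROM request WHERE status IS NULL "
                    "ORDER BY id LIMIT 1 FOR UPDATE")
        row = cur.fetchone()
        if row is None:
            conn.commit()  # release locks before the next attempt
            continue
        cur.execute("UPDATE request SET status = 'start' WHERE id = %s",
                    (row[0],))
        conn.commit()
        return row[0]
    return None


# Minimal in-memory stand-in for a DB-API connection, so the loop can be
# exercised without a server; it is NOT how pgdb behaves internally.
class FakeCursor:
    def __init__(self, queue):
        self.queue, self.result = queue, None

    def execute(self, sql, params=()):
        if sql.lstrip().upper().startswith("SELECT"):
            pending = [i for i, s in sorted(self.queue.items()) if s is None]
            self.result = (pending[0],) if pending else None
        else:  # the UPDATE above
            self.queue[params[0]] = "start"

    def fetchone(self):
        return self.result


class FakeConn:
    def __init__(self, queue):
        self.queue = queue

    def cursor(self):
        return FakeCursor(self.queue)

    def commit(self):
        pass


if __name__ == "__main__":
    queue = {1: None, 2: None}
    conn = FakeConn(queue)
    print(claim_next_request(conn))  # claims request 1
    print(claim_next_request(conn))  # claims request 2
    print(claim_next_request(conn))  # queue empty: None
```

With a real connection the `None` path is exactly the retry case above: T2's SELECT comes back empty after T1 commits.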

If you want to avoid retries, a slightly different approach is required.
You can do it the way it's normally done in other software: take a
'global' exclusive lock for dequeuing. Designate another table or row
for synchronization and do SELECT FOR UPDATE or LOCK TABLE ... IN
EXCLUSIVE MODE on it.
If your volume is low, even an exclusive table lock will work.
I'm not sure whether you can do it correctly with an advisory lock,
since advisory locks are not released at commit time; you must release
them explicitly, which risks requiring a retry or an additional
transaction anyway.
Note that this requires the READ COMMITTED isolation level; under
SERIALIZABLE you will get 'can't serialize' errors and must do retries
regardless.
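On the advisory-lock caveat: since `pg_advisory_lock()` survives COMMIT, pairing the lock with an explicit `pg_advisory_unlock()` in one construct helps avoid leaking it when a worker fails mid-dequeue. A sketch in Python (the context manager and the recording fake used to exercise it are mine, not part of pgdb):

```python
from contextlib import contextmanager


@contextmanager
def advisory_lock(conn, key):
    """Hold pg_advisory_lock(key) for the duration of the with-block.
    Unlike row locks, advisory locks are not released by COMMIT, so the
    explicit unlock in the finally-clause is essential."""
    cur = conn.cursor()
    cur.execute("SELECT pg_advisory_lock(%s)", (key,))
    try:
        yield
    finally:
        cur.execute("SELECT pg_advisory_unlock(%s)", (key,))


# Tiny stand-in that records executed statements, so the lock/unlock
# pairing can be checked without a server.
class RecordingConn:
    def __init__(self):
        self.log = []

    def cursor(self):
        return self

    def execute(self, sql, params=()):
        self.log.append(sql)


if __name__ == "__main__":
    conn = RecordingConn()
    try:
        with advisory_lock(conn, 42):
            raise RuntimeError("worker died mid-dequeue")
    except RuntimeError:
        pass
    print(conn.log)  # the unlock was still issued
```

Even so, a crash of the whole client process still leaves the advisory lock held until the session ends, which is the reason to prefer transaction-scoped locks here.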

You probably want to wrap this in a stored procedure (PL/pgSQL), as the
reduced number of round-trips makes the process much faster. Fewer
round-trips also mean the lock is held for less time, reducing
contention.

Obviously the database is still no match for a purpose-built
dispatcher, which can control its locks at much finer (time)
granularity and with far less overhead (the database pays for
transaction setup/commit and a relatively complex locking protocol).

- Joris
