From: Csaba Nagy <nagy(at)ecircle-ag(dot)com>
To: Jochem van Dieten <jochemd(at)oli(dot)tudelft(dot)nl>
Cc: Postgres general mailing list <pgsql-general(at)postgresql(dot)org>, Joost Kraaijeveld <J(dot)Kraaijeveld(at)Askesis(dot)nl>
Subject: Re: [JDBC] Is what I want possible and if so how?
Date: 2006-07-21 16:46:34
Message-ID: 1153500394.5683.304.camel@coppola.muc.ecircle.de
Lists: pgsql-general
Jochem,
> For a small number of processes and a large difference in time
> between the 'lookup' speed and the 'work', I have used a two-step
> process where you first get a batch of records and then try them
> all in rapid succession. In pseudocode:
>
> SELECT *
> FROM table
> WHERE condition
> LIMIT number_of_queue_processes + 1;
>
> LOOP;
> BEGIN;
> SELECT *
> FROM table
> WHERE condition AND pk = xxx
> LIMIT 1 FOR UPDATE NOWAIT;
>
> do something;
> COMMIT;
> END;
>
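Filled in as concrete SQL, the two steps of the pattern above might look like the sketch below. The names `queue_table`, `processed`, and the pk value 42 are placeholders, and 11 stands for `number_of_queue_processes + 1`:

```sql
-- Step 1: fetch a batch of candidate primary keys.
SELECT pk
FROM queue_table
WHERE processed = false
LIMIT 11;

-- Step 2: for each candidate pk, try to claim it. NOWAIT makes the
-- SELECT fail immediately (instead of blocking) when another worker
-- already holds the row lock, so this worker just moves on to the
-- next candidate.
BEGIN;
SELECT *
FROM queue_table
WHERE processed = false AND pk = 42
LIMIT 1 FOR UPDATE NOWAIT;
-- ... do the work, mark the row processed ...
COMMIT;
```

Repeating the `processed = false` condition in step 2 also guards against a row that another worker finished between the two steps: such a row simply no longer matches, and the loop moves on.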
I decided to use the same scheme here. The only improvement I can see
is to shuffle the batch into random order... that way the competing
processors have a better chance of avoiding collisions. I have to see
how well this works out once I get the chance to deploy it...
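The shuffling could also be pushed into the batch query itself; here is one way to sketch it, again with `queue_table`, `processed`, and `pk` as placeholder names. The inner query picks the batch exactly as before, and the outer `ORDER BY random()` returns it in a different random order to each worker:

```sql
-- Fetch the same head-of-queue batch, but hand it back shuffled,
-- so competing workers start their claim loop at different rows.
SELECT pk
FROM (
    SELECT pk
    FROM queue_table
    WHERE processed = false
    LIMIT 11               -- number_of_queue_processes + 1
) AS batch
ORDER BY random();
```

Keeping the `LIMIT` in the subquery matters: shuffling only the fetched batch preserves whatever selection the queue query makes, whereas putting `ORDER BY random()` before the `LIMIT` would instead sample random rows from the whole backlog.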
Thanks,
Csaba.