From: Andy Colson <andy(at)squeakycode(dot)net>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: using a postgres table as a multi-writer multi-updater queue
Date: 2015-11-23 18:22:07
Message-ID: 5653594F.3050906@squeakycode.net
Lists: pgsql-general
On 11/23/2015 4:41 AM, Chris Withers wrote:
> Hi All,
>
> I wondered if any of you could recommend best practices for using a
> postgres table as a queue. Roughly speaking, 100-200 workers will vomit
> rows at rates of a few hundred per second into the table, leaving the
> status as new, and then as many workers as needed to keep up with the
> load will plough through the queue changing the status to something
> other than new.
>
> My naive implementation would be something along the lines of:
>
> CREATE TABLE event (
> ts timestamp,
> event char(40),
> status char(10),
> CONSTRAINT pkey PRIMARY KEY(ts, event)
> );
>
>
> ...with writers doing INSERT or COPY to get data into the table and
> readers doing something like:
>
> SELECT * FROM event WHERE status='new' LIMIT 1000 FOR UPDATE;
>
> ...so, grabbing batches of 1,000, working on them and then setting their
> status.
>
> But, am I correct in thinking that SELECT FOR UPDATE will not prevent
> multiple workers selecting the same rows?
>
> Anyway, is this approach reasonable? If so, what tweaks/optimisations
> should I be looking to make?
>
> If it's totally wrong, how should I be looking to approach the problem?
>
> cheers,
>
> Chris
Have you tried Redis? It's really good at that sort of thing.
-Andy
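
[Editor's note: for readers finding this thread later, the locking question in the quoted post is what FOR UPDATE SKIP LOCKED (added in PostgreSQL 9.5) was designed for: plain FOR UPDATE makes concurrent workers block on each other's locked rows rather than skip them. A minimal sketch of a worker query, assuming the event table and 'new'/'processing' status values from the original post:]

```sql
-- Claim a batch of up to 1000 unclaimed rows, skipping any rows
-- already locked by another worker (requires PostgreSQL 9.5+).
BEGIN;

UPDATE event
SET status = 'processing'
WHERE (ts, event) IN (
    SELECT ts, event
    FROM event
    WHERE status = 'new'
    ORDER BY ts
    LIMIT 1000
    FOR UPDATE SKIP LOCKED
)
RETURNING ts, event;

-- ...process the returned rows, then mark them done and COMMIT.
COMMIT;
```

[Because the subquery takes row locks with SKIP LOCKED, two workers running this concurrently will claim disjoint batches instead of blocking or double-processing.]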
Next Message: John R Pierce | 2015-11-23 18:54:58 | Re: using a postgres table as a multi-writer multi-updater queue
Previous Message: Jim Nasby | 2015-11-23 16:31:42 | Re: using a postgres table as a multi-writer multi-updater queue