From: Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com>
To: Ladislav Lenart <lenartlad(at)volny(dot)cz>, Chris Withers <chris(at)simplistix(dot)co(dot)uk>, pgsql-general(at)postgresql(dot)org
Subject: Re: using a postgres table as a multi-writer multi-updater queue
Date: 2015-11-23 16:31:42
Message-ID: 56533F6E.6020804@BlueTreble.com
Lists: pgsql-general
On 11/23/15 6:12 AM, Ladislav Lenart wrote:
> I suggest an excellent read on this topic:
>
> http://www.depesz.com/2013/08/30/pick-a-task-to-work-on/
>
> Highly recommended if you haven't read it yet.
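The core idea in that article is letting many workers each claim one task without blocking each other. A minimal sketch of one common approach, using SELECT ... FOR UPDATE SKIP LOCKED (available in PostgreSQL 9.5 and later; the article also covers advisory-lock techniques for older versions). The table and column names here are illustrative, not from the thread:

```sql
-- Hypothetical queue table; names are invented for illustration.
CREATE TABLE task_queue (
    id         bigserial PRIMARY KEY,
    payload    text NOT NULL,
    created_at timestamptz NOT NULL DEFAULT now()
);

-- One worker claims and removes a single task; concurrent workers
-- skip over rows that are already locked instead of waiting.
BEGIN;
DELETE FROM task_queue
WHERE id = (
    SELECT id
    FROM task_queue
    ORDER BY id
    FOR UPDATE SKIP LOCKED
    LIMIT 1
)
RETURNING payload;
-- ... process the returned payload ...
COMMIT;
```

If the transaction rolls back, the row is unlocked and undeleted, so another worker can pick it up.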
One thing it doesn't mention, which you need to be aware of, is the vacuum workload on a queue table. In a busy queue, it will be difficult or even impossible for vacuum to keep the number of dead rows down to something manageable. That's why PgQ and Slony don't even attempt it; instead, they rotate through a fixed set of tables. Once all the entries in a table have been processed, the table is truncated.
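A rough sketch of that rotation scheme (this is not PgQ's actual DDL; the names and the control mechanism are invented for illustration):

```sql
-- Two interchangeable queue tables; writers insert into whichever is
-- "current" (tracked elsewhere, e.g. in a one-row control table).
CREATE TABLE queue_1 (id bigserial, payload text);
CREATE TABLE queue_2 (id bigserial, payload text);

-- When it's time to rotate, flip the control table so writers switch
-- to queue_2. Once every row in queue_1 has been consumed:
TRUNCATE queue_1;  -- near-instant, and leaves no dead rows for vacuum
```

TRUNCATE reclaims all the space at once, so the dead-row problem never builds up the way it does with DELETE.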
If you go the delete route, make sure you don't index any fields in the queue table that get updated (otherwise you won't get HOT updates), and run very aggressive manual vacuums so the table stays small.
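One way to get that aggressive vacuuming on a single hot table, swapping the manual vacuum for per-table autovacuum storage parameters, is shown below. The table name and the parameter values are examples only, not recommendations from the thread:

```sql
-- Make autovacuum fire after a fixed (small) number of dead rows
-- instead of a fraction of the table, and remove its cost delay.
ALTER TABLE task_queue SET (
    autovacuum_vacuum_scale_factor = 0.0,
    autovacuum_vacuum_threshold    = 1000,
    autovacuum_vacuum_cost_delay   = 0
);

-- Or keep it manual, driven from cron or a worker process:
VACUUM task_queue;
```

Either way, the goal is the same: vacuum often enough that dead rows never accumulate faster than they are cleaned up.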
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com