From: "James Riordan" <james(dot)riordan(at)gmail(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Dispatch-Merge pattern
Date: 2007-03-13 13:36:47
Message-ID: eb4939fe0703130636r664d32e9qd8d3975635c7786c@mail.gmail.com
Lists: pgsql-performance
Howdy-
I am currently using PostgreSQL to store and process a high-bandwidth
event stream. I do not need old events, but the DELETE and VACUUM do
not terminate due to the large number of events being inserted (they
just push me over the tipping point where the machine can no longer
keep up with the events).
I ended up implementing a scheme where a trigger is used to redirect
the events (round robin based on time) to a series of identically
structured tables. I can then TRUNCATE older tables rather than
DELETE and VACUUM them (which is a significant speed-up).
It worked out pretty well, so I thought I'd post the idea to find out whether
- it is a stupid way of doing things and there is a correct database
abstraction for doing this
or
- it is a reasonable way of solving this problem and might be of use
to other folks using RDBMSs for event processing.
I then use a view to merge the tables. Obviously UPDATE would be a
problem, but for my purposes (and, I suppose, for a lot of event
processing) it isn't an issue.
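To make the idea concrete, here is a minimal sketch of the scheme as I understand it from the description above: table, trigger, and view names are hypothetical (the actual schema is in the linked post), and the round-robin rule here simply alternates two tables by hour.

```sql
-- Identically structured tables in the rotation (hypothetical names).
CREATE TABLE events   (ts timestamptz NOT NULL, payload text);
CREATE TABLE events_0 (LIKE events);
CREATE TABLE events_1 (LIKE events);

-- BEFORE INSERT trigger on the base table redirects each row,
-- round robin on time (here: alternating partition each hour).
CREATE FUNCTION events_dispatch() RETURNS trigger AS $$
BEGIN
    IF extract(hour FROM NEW.ts)::int % 2 = 0 THEN
        INSERT INTO events_0 VALUES (NEW.*);
    ELSE
        INSERT INTO events_1 VALUES (NEW.*);
    END IF;
    RETURN NULL;  -- suppress the insert into the base table itself
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER events_dispatch_trg
    BEFORE INSERT ON events
    FOR EACH ROW EXECUTE PROCEDURE events_dispatch();

-- View that merges the rotation tables for reads.
CREATE VIEW events_merged AS
    SELECT * FROM events_0
    UNION ALL
    SELECT * FROM events_1;

-- Expiring old events is then a cheap TRUNCATE instead of DELETE + VACUUM:
TRUNCATE events_0;
```

With more tables in the rotation, the trigger would pick a table from a wider time window; expiring old data stays cheap because TRUNCATE reclaims the table's files outright rather than scanning and marking individual rows.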
Either way, details are at:
http://unsyntax.net/james/blog/tools+and+programming/2007/03/08/Dispatch-Merge-Database-Pattern
Cheers,
James