From: Sean Chittenden <sean(at)chittenden(dot)org>
To: Dmitry Tkach <dmitry(at)openratings(dot)com>
Cc: pg_general <pgsql-general(at)postgresql(dot)org>
Subject: Re: Postgresql "FIFO" Tables, How-To ?
Date: 2003-07-16 21:16:46
Message-ID: 20030716211646.GJ24507@perrin.int.nxad.com
Lists: pgsql-general
> >That doesn't help you limit the number of rows in the table though
> >because what row is going to get "pushed out" of the table? The
> >nifty thing about using a wrapping sequence is that the id's are
> >sequential across transactions, which correctly maps to the
> >progression of time, which obviates the need for relying on any
> >kind of a date column for doing syslog message ordering.
>
> You must have misunderstood my suggestion - you'd still have a wrapping
> sequence, that would generate your ids, and determine which row would be
> pushed out - you would just have *another* sequence *in addition* to
> that, that would count the total number of inserts, so that you would
> know if the wraparound had already happened without having to issue
> another query, which, by the way, is not very robust, because another
> connection may be inserting at the same time, so your update will not
> see the new rowid, but the insert will fail... Keeping two sequences -
> one for id generation, and one for counting solves that.
*nods* Got 'cha.
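For what it's worth, the two-sequence scheme can be sketched in miniature. The following is a hypothetical Python simulation of the logic as described above (a real implementation would use two PostgreSQL sequences, with the id sequence declared CYCLE, inside an insert function); the class and names are illustrative, not code from this thread:

```python
# Hypothetical sketch of the two-sequence FIFO scheme: a wrapping
# counter generates slot ids, and a second, non-wrapping counter
# tracks total inserts so we know, without an extra query, whether
# the id has wrapped and an existing row must be replaced.

class FifoTable:
    def __init__(self, limit):
        self.limit = limit        # maximum number of rows to retain
        self.rows = {}            # slot id -> message (stands in for the table)
        self.insert_count = 0     # second sequence: total inserts ever

    def log(self, message):
        slot = self.insert_count % self.limit   # wrapping id sequence
        wrapped = self.insert_count >= self.limit
        # If we have wrapped, this overwrite corresponds to an UPDATE
        # of an existing row; otherwise it is a plain INSERT.
        self.rows[slot] = message
        self.insert_count += 1
        return slot, wrapped

fifo = FifoTable(5)
for i in range(7):
    slot, wrapped = fifo.log(f"msg {i}")
# After 7 inserts with limit 5, only the 5 newest messages remain:
# slots 0 and 1 were overwritten by msgs 5 and 6.
```

Because the insert counter is a sequence rather than a SELECT against the table, concurrent inserters each get a distinct value and the wraparound test stays race-free.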
> Well... first of all, "just UPDATE" is *exactly* the same internally
> as DELETE+INSERT. Secondly, I just don't like having to populate
> the table with millions of empty rows just to keep that 'no logic'
> function happy :-) And finally, nextval % limit hardly qualifies as
> 'logic' to me... and that's really all there is.
True, but in theory, because an UPDATE collapses the DELETE+INSERT into a
single statement, there is less latency between the two steps and fewer
resources are used.
> >That said, it'd be cool if there was a GLOBAL TEMP table that could
> >be used for this kinda stuff... not having something WAL backed on
> >this kind of an application would make things really scream.
>
> Hmmm... That brings up the question I wanted to ask from the very
> beginning, but just didn't dare :-) What is the reason to throw logs
> into the database in the first place? What's wrong with 'normal'
> syslog? The only difference I can imagine is MVCC, transaction
> consistency, etc... But if you don't care about all that, why not
> just throw things into a file? Now *that* indeed would scream :-)
At Cisco, we had syslog messages coming in from tens of thousands of
machines that generated millions of syslog entries a day. The
messages were dumped into text files and once a minute the data was
loaded and stored in the database. Why a database? *shrug* The only
useful thing that we did was run Perl regexps stored in the DB over
the data and send out custom alerts. I suppose the baby puke colors
for the web interface were aesthetically pleasing to some, but I never
saw the attraction.
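The alerting part can be sketched simply. This is a hypothetical illustration of the approach described above (patterns stored with the data are run over incoming messages and matches trigger alerts); the pattern table, function name, and sample messages are invented for the example, and Python's re stands in for the Perl regexps:

```python
import re

# Patterns that would live in a database table in the real system,
# each paired with the alert it should raise. Purely illustrative.
alert_patterns = [
    (re.compile(r"link down", re.IGNORECASE), "interface outage"),
    (re.compile(r"BGP.*neighbor.*down", re.IGNORECASE), "BGP session lost"),
]

def scan(messages):
    """Return (message, alert_name) pairs for every pattern match."""
    alerts = []
    for msg in messages:
        for pattern, name in alert_patterns:
            if pattern.search(msg):
                alerts.append((msg, name))
    return alerts

hits = scan([
    "Jul 16 21:16:46 rtr1 %LINEPROTO-5: Line protocol on Gi0/1, Link down",
    "Jul 16 21:16:47 rtr1 %SYS-5-CONFIG_I: Configured from console",
])
# Only the first message matches, raising an "interface outage" alert.
```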
You can design to use or not use any one particular data storage
format. What's nice about using a DB for this kind of stuff, however,
is that it's easily searchable via SQL, which can be quite valuable. Data
that isn't accessible or processable is pretty worthless. All that
said and done, I use a similar system for my hosting clients. I'll
archive their syslog bits for as long as they'd like me to, provided
they pay for the disk that their bits take up. :)
-sc
--
Sean Chittenden