From: Dmitry Tkach <dmitry(at)openratings(dot)com>
To: Sean Chittenden <sean(at)chittenden(dot)org>
Cc: Andrew Sullivan <andrew(at)libertyrms(dot)info>, pg_general <pgsql-general(at)postgresql(dot)org>
Subject: Re: Postgresql "FIFO" Tables, How-To ?
Date: 2003-07-16 20:58:02
Message-ID: 3F15BC5A.3000006@openratings.com
Lists: pgsql-general
>
>
>That doesn't help you limit the number of rows in the table though
>because what row is going to get "pushed out" of the table? The nifty
>thing about using a wrapping sequence is that the id's are sequential
>across transactions, which correctly maps to the progression of time,
>which obviates the need for relying on any kind of a date column for
>doing syslog message ordering.
>
You must have misunderstood my suggestion - you'd still have a wrapping
sequence that generates your ids and determines which row gets pushed
out. You'd just have *another* sequence, *in addition* to that, counting
the total number of inserts, so that you know whether the wraparound has
already happened without having to issue another query. That extra query
is not very robust anyway, because another connection may be inserting at
the same time - your update wouldn't see the new rowid, but your insert
would then fail... Keeping two sequences - one for id generation, and one
for counting - solves that.
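To make that concrete, here is a minimal sketch of what I have in mind
(the table fifo_log(id, msg), the 1000000-row limit, and the function
name are all just placeholders):

create sequence fifo_id_seq maxvalue 1000000 cycle; -- cycles: generates the row ids
create sequence fifo_count_seq;                     -- never cycles: counts total inserts

create or replace function fifo_put (text) returns void as '
declare
    p_msg   alias for $1;
    v_id    bigint := nextval(''fifo_id_seq'');
    v_count bigint := nextval(''fifo_count_seq'');
begin
    if v_count <= 1000000 then
        -- wraparound has not happened yet - just add a new row
        insert into fifo_log (id, msg) values (v_id, p_msg);
    else
        -- wraparound already happened - overwrite the oldest row,
        -- i.e. the one whose id the cycling sequence just reused
        update fifo_log set msg = p_msg where id = v_id;
    end if;
end;
' language 'plpgsql';

(Since nextval() is never rolled back, an aborted insert leaves a gap in
both sequences - for log data that is usually fine.)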
>
>
>
>>... or you could do it with the single sequence still, if you get
>>rid of the limit and cycle, and just do if nextval >= limit then
>>update ... where id = nextval % limit else insert
>>
>>
>
>There's no logic necessary with the above function. You get the next
>val in the sequence and update the ID. No need to delete, no nothing
>other than VACUUM to remove the dead tuples, which, you have to do
>regardless of the method chosen. A DELETE + UPDATE is just a waste of
>CPU and IO with the table being MVCC backed. Just UPDATE and be done
>with it.
>
Well... first of all, "just UPDATE" is *exactly* the same internally as
DELETE+INSERT.
Secondly, I just don't like having to pre-populate the table with millions
of empty rows just to keep that 'no logic' function happy :-)
And finally, nextval % limit hardly qualifies as 'logic' to me... and
that's really all there is.
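For comparison, the single-sequence variant would look roughly like this
(same made-up fifo_log table and 1000000-row limit; I shift the modulo by
one so the ids stay in the range 1..1000000):

create sequence fifo_seq;  -- plain sequence, no maxvalue, no cycle

create or replace function fifo_put2 (text) returns void as '
declare
    p_msg alias for $1;
    v_n   bigint := nextval(''fifo_seq'');
    v_id  bigint := ((v_n - 1) % 1000000) + 1;  -- maps every value back onto 1..1000000
begin
    if v_n <= 1000000 then
        insert into fifo_log (id, msg) values (v_id, p_msg);
    else
        update fifo_log set msg = p_msg where id = v_id;
    end if;
end;
' language 'plpgsql';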
>
>That said, it'd be cool if there was a GLOBAL TEMP table that could be
>used for this kinda stuff... not having something WAL backed on this
>kind of an application would make things really scream.
>
>
Hmmm... That brings up the question I wanted to ask from the very
beginning, but just didn't dare :-)
What is the reason to throw logs into the database in the first place?
What's wrong with 'normal' syslog?
The only difference I can imagine is MVCC, transaction consistency, etc...
But if you don't care about all that, why not just throw things into a
file? Now *that* indeed would scream :-)
Dima