From: Terry <td3201(at)gmail(dot)com>
To: Andy Colson <andy(at)squeakycode(dot)net>
Cc: PostgreSQL <pgsql-general(at)postgresql(dot)org>
Subject: Re: data dump help
Date: 2010-01-18 23:07:42
Message-ID: 8ee061011001181507q2c677873vca77d3e6fb5afaf6@mail.gmail.com
Lists: pgsql-general
On Mon, Jan 18, 2010 at 4:48 PM, Andy Colson <andy(at)squeakycode(dot)net> wrote:
> On 1/18/2010 4:08 PM, Terry wrote:
>>
>> Hello,
>>
>> Sorry for the poor subject. Not sure how to describe what I need
>> here. I have an application that logs to a single table in pgsql.
>> In order for me to get into our log management, I need to dump it out
>> to a file on a periodic basis to get new logs. I am not sure how to
>> tackle this. I thought about doing a date calculation and just
>> grabbing the previous 6 hours of logs and writing that to a new log
>> file and setting up a rotation like that. Unfortunately, the log
>> management solution can't go into pgsql directly. Thoughts?
>>
>> Thanks!
>>
>
> How about a flag in the db, like: dumped.
>
> Inside one transaction you'd be safe doing:
>
> BEGIN;
> SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
> SELECT * FROM log WHERE dumped = 0;
> -- app code to format/write/etc
> UPDATE log SET dumped = 1 WHERE dumped = 0;
> COMMIT;
>
> Even if other transactions insert new records, your existing transaction
> won't see them, and the update won't touch them.
>
> -Andy
>
I like your thinking, but I shouldn't add a new column to this
database. It's a 3rd-party application.
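
A possible variation that leaves the vendor schema untouched: keep the export
cutoff in a bookkeeping table of your own and select only rows newer than it.
This is only a sketch under assumptions the thread doesn't confirm: it assumes
the log table has a timestamp column (called "log_time" here, a made-up name)
and that you run the export from psql or similar:

```sql
-- One-time setup: our own state table; the 3rd-party "log" table is untouched.
CREATE TABLE export_state (last_exported timestamptz NOT NULL);
INSERT INTO export_state VALUES ('-infinity');

-- Periodic export: dump everything newer than the stored cutoff,
-- then advance the cutoff, all in one snapshot.
BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
COPY (
  SELECT l.*
  FROM log l
  WHERE l.log_time > (SELECT last_exported FROM export_state)
) TO STDOUT;
UPDATE export_state
   SET last_exported = (SELECT coalesce(max(l.log_time),
                                        export_state.last_exported)
                        FROM log l);
COMMIT;
```

One caveat with any timestamp cutoff: a row inserted by a transaction that was
still in flight when the snapshot was taken can carry a timestamp below the new
cutoff and be skipped, which is the race Andy's dumped-flag approach avoids.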