From: Andy Colson <andy(at)squeakycode(dot)net>
To: Terry <td3201(at)gmail(dot)com>, PostgreSQL <pgsql-general(at)postgresql(dot)org>
Subject: Re: data dump help
Date: 2010-01-18 22:48:29
Message-ID: 4B54E53D.4030203@squeakycode.net
Lists: pgsql-general
On 1/18/2010 4:08 PM, Terry wrote:
> Hello,
>
> Sorry for the poor subject. Not sure how to describe what I need
> here. I have an application that logs to a single table in pgsql.
> In order for me to get into our log management, I need to dump it out
> to a file on a periodic basis to get new logs. I am not sure how to
> tackle this. I thought about doing a date calculation and just
> grabbing the previous 6 hours of logs and writing that to a new log
> file and setting up a rotation like that. Unfortunately, the log
> management solution can't go into pgsql directly. Thoughts?
>
> Thanks!
>
How about a flag column in the db, like: dumped.
Inside one transaction you'd be safe doing:
BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT * FROM log WHERE dumped = 0;
-- app code to format/write/etc.
UPDATE log SET dumped = 1 WHERE dumped = 0;
COMMIT;
Even if other transactions insert new records while this runs, your
existing transaction won't see them, and the update won't touch them.
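If the log file needs to be written straight from the database rather
than by app code, the same pattern can be combined with psql's \copy.
A rough sketch, assuming a table named "log" and a CSV target path of
your choosing (the filename here is just a placeholder):

```
BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
-- \copy writes the file on the client side, so no server
-- filesystem access is needed
\copy (SELECT * FROM log WHERE dumped = 0) TO 'new_logs.csv' CSV
UPDATE log SET dumped = 1 WHERE dumped = 0;
COMMIT;
```

You could then run that script from cron every 6 hours and point the
log management tool at the output files.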
-Andy