From: Terry <td3201(at)gmail(dot)com>
To: Andy Colson <andy(at)squeakycode(dot)net>
Cc: PostgreSQL <pgsql-general(at)postgresql(dot)org>
Subject: Re: data dump help
Date: 2010-01-18 23:49:32
Message-ID: 8ee061011001181549q3c990f40uf1066a768fe3e071@mail.gmail.com
Lists: pgsql-general
On Mon, Jan 18, 2010 at 5:07 PM, Terry <td3201(at)gmail(dot)com> wrote:
> On Mon, Jan 18, 2010 at 4:48 PM, Andy Colson <andy(at)squeakycode(dot)net> wrote:
>> On 1/18/2010 4:08 PM, Terry wrote:
>>>
>>> Hello,
>>>
>>> Sorry for the poor subject. Not sure how to describe what I need
>>> here. I have an application that logs to a single table in pgsql.
>>> In order for me to get into our log management, I need to dump it out
>>> to a file on a periodic basis to get new logs. I am not sure how to
>>> tackle this. I thought about doing a date calculation and just
>>> grabbing the previous 6 hours of logs and writing that to a new log
>>> file and setting up a rotation like that. Unfortunately, the log
>>> management solution can't go into pgsql directly. Thoughts?
>>>
>>> Thanks!
>>>
>>
>> How about a flag in the db, like: dumped.
>>
>> inside one transactions you'd be safe doing:
>>
>> begin
>> SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
>> select * from log where dumped = 0;
>> -- app code to format/write/etc
>> update log set dumped = 1 where dumped = 0;
>> commit;
>>
>> Even if other transactions insert new records, your existing transaction
>> won't see them, and the update won't touch them.
>>
>> -Andy
>>
>
> I like your thinking, but I shouldn't add a new column to this
> database. It's a 3rd-party application.
>
Actually, I really like your idea, so I might create another table
where I log whether the data has been dumped or not. I just need
to come up with a query that checks it against the other table.
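A minimal sketch of that idea, keeping Andy's REPEATABLE READ approach but using a separate tracking table instead of a new column. The table and column names (`log`, `id`, `log_dumped`) are hypothetical, since the 3rd-party schema isn't shown:

```sql
-- Hypothetical tracking table: one row per log row already exported.
CREATE TABLE log_dumped (
    log_id integer PRIMARY KEY  -- matches log's primary key column
);

BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

-- Select only the rows that haven't been exported yet.
SELECT l.*
FROM log l
LEFT JOIN log_dumped d ON d.log_id = l.id
WHERE d.log_id IS NULL;

-- app code to format/write the rows to a file goes here

-- Mark the same not-yet-exported rows as dumped; under REPEATABLE READ
-- this sees the same snapshot as the SELECT above.
INSERT INTO log_dumped (log_id)
SELECT l.id
FROM log l
LEFT JOIN log_dumped d ON d.log_id = l.id
WHERE d.log_id IS NULL;

COMMIT;
```

For the periodic file dump, something like psql's `\copy` from a cron job could wrap the SELECT, e.g. `psql -c "\copy (SELECT ...) TO 'newlogs.csv' CSV"` — again just a sketch, not tested against the actual schema.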