From: Steve Crawford <scrawford(at)pinpointresearch(dot)com>
To: Jeff Boes <jboes(at)nexcerpt(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: continuous data from stdin
Date: 2003-02-19 16:31:21
Message-ID: 20030219163121.7AA59103BD@polaris.pinpointresearch.com
Lists: pgsql-general
If you can route the stuff to syslog, consider the modular syslog daemon
(msyslogd). It has a PostgreSQL output module that puts the log data directly
into a database. I have also modified the module for one specific log stream
to parse out additional fields for entry into the database.

It has been quite robust (I have my syslog->postgres connection up for months
at a time, and even then I only break the connection to restart msyslogd
after code tweaking).
Cheers,
Steve
On Tuesday 18 February 2003 10:34 am, Jeff Boes wrote:
> On Tue, 18 Feb 2003 13:09:45 -0500, Веретенников Алексей wrote:
> > I think it's a common question, though I couldn't find any information
> > about it so far. I've got a program that puts its logs into stdout.
> > What is the best solution to insert these logs into a table as they
> > occur?
>
> I think you're going to have to write a program (Perl using DBI would be
> my choice) to use this output as input, and do INSERT statements as each
> record is received. (The COPY command in SQL won't do the trick, if
> that's what you're thinking of.)
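For illustration, here is a rough sketch of that stdin-to-INSERT loop. Jeff
suggested Perl with DBI; this version uses Python with psycopg2 instead, and
the connection parameters, table name (logs), and column name (line) are
placeholders, not anything from the original thread.

    #!/usr/bin/env python
    # Read log lines from stdin and INSERT each one as it arrives.
    # Assumes a table like:
    #   CREATE TABLE logs (ts timestamptz DEFAULT now(), line text);
    # Connection parameters and table/column names are placeholders.
    import sys
    import psycopg2

    conn = psycopg2.connect(dbname="logs", user="loguser", host="localhost")
    conn.autocommit = True          # commit each INSERT immediately
    cur = conn.cursor()

    for line in sys.stdin:
        line = line.rstrip("\n")
        if not line:
            continue
        cur.execute("INSERT INTO logs (line) VALUES (%s)", (line,))

    cur.close()
    conn.close()

You would then pipe the program's output into it, e.g.
your_program | python log_to_pg.py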