From: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>
To: Andrew Dunstan <andrew(at)dunslane(dot)net>
Cc: Postgresql Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Feature request -- Log Database Name
Date: 2003-07-30 18:45:50
Message-ID: 200307301845.h6UIjoE25918@candle.pha.pa.us
Lists: pgsql-hackers
One idea would be to output log information as INSERT statements, so we
could log connection/dbname/username to one table, per-session
information to another table, and server-level info to a third table.
If you want to analyze the logs, you could load the data into a database
via those INSERTs, and even do joins and analyze the output using SQL!

This would solve the problem of failed transactions being unable to
export log information, would not add extra overhead for every log
message, and would handle the problem of analyzing the log tables while
the system was still running and emitting more log output.
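Roughly, the emitted log lines and the tables they would load into might
look like this (all table, column, and data values here are only
illustrative, not a worked-out design):

    -- Tables the loaded log data would populate:
    CREATE TABLE log_connections (
        log_time  timestamptz,
        pid       integer,
        dbname    name,
        username  name
    );

    CREATE TABLE log_sessions (
        log_time  timestamptz,
        pid       integer,
        severity  text,
        message   text
    );

    CREATE TABLE log_server (
        log_time  timestamptz,
        severity  text,
        message   text
    );

    -- The server would write its log file as ready-to-run INSERTs:
    INSERT INTO log_connections VALUES
        ('2003-07-30 14:45:50-04', 25918, 'sales', 'bob');
    INSERT INTO log_sessions VALUES
        ('2003-07-30 14:46:02-04', 25918, 'ERROR', 'duplicate key violation');

    -- After loading the file with psql -f, per-database error reports
    -- are just a join (joining on pid alone is a simplification):
    SELECT c.dbname, s.log_time, s.message
      FROM log_sessions s
      JOIN log_connections c USING (pid)
     WHERE s.severity = 'ERROR';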
---------------------------------------------------------------------------
Andrew Dunstan wrote:
> There seem to be two orthogonal issues here - in effect, how to log and
> where to log. I had a brief look, and providing an option to log the
> dbname where appropriate seems quite easy - unless someone else is
> already doing it, I will look at it over the weekend. Assuming that were
> done, you could split the log based on dbname.
>
> For the reasons Tom gives, logging to a table looks much harder and
> possibly undesirable - I would normally want my log table(s) in a
> different database, possibly even on a different machine, from my
> production transactional database. However, an ISP might want to provide
> the logs for each client in their designated db. It therefore seems far
> more sensible to load logs into tables out of band, as Tom suggests,
> possibly with some helper tools in contrib to parse the logs, or even to
> load them in more or less real time (many tools exist to do this sort of
> thing for web logs, so it is hardly rocket science - a classic case for
> a perl script ;-).
>
> cheers
>
> andrew
>
>
> ohp(at)pyrenet(dot)fr wrote:
>
> >On Mon, 28 Jul 2003, Tom Lane wrote:
> >
> >>Date: Mon, 28 Jul 2003 21:39:23 -0400
> >>From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
> >>To: Robert Treat <xzilla(at)users(dot)sourceforge(dot)net>
> >>Cc: ohp(at)pyrenet(dot)fr, Larry Rosenman <ler(at)lerctr(dot)org>,
> >> Josh Berkus <josh(at)agliodbs(dot)com>,
> >> pgsql-hackers list <pgsql-hackers(at)postgresql(dot)org>
> >>Subject: Re: [HACKERS] Feature request -- Log Database Name
> >>
> >>Robert Treat <xzilla(at)users(dot)sourceforge(dot)net> writes:
> >>
> >>
> >>>I think better would be a GUC "log_to_table" which wrote all standard
> >>>out/err to a pg_log table. Of course, I doubt you could make this
> >>>foolproof (how would you log startup errors in this table?), but it
> >>>could be a start.
> >>>
> >>>
> >>How would a failed transaction make any entries in such a table? How
> >>would you handle maintenance operations on the table that require an
> >>exclusive lock (vacuum full, reindex, etc.)?
> >>
> >>It seems possible that you could make this work if you piped stderr to a
> >>buffering process that was itself a database client, and issued INSERTs
> >>to put the rows into the table, and could buffer pending data whenever
> >>someone else had the table locked (eg for vacuum). I'd not care to try
> >>to get backends to do it locally.
> >>
> >> regards, tom lane
> >>
> >>
> >Not quite; my goal is to have a log per database, and stderr doesn't
> >contain enough information to split it.
> >
> >As an ISP, I would like each customer with one or more databases to be
> >able to see any errors on their databases.
> >I imagine having a log file per database would be too complicated...
> >
>
> ---------------------------(end of broadcast)---------------------------
> TIP 4: Don't 'kill -9' the postmaster
>
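For the ISP case quoted above, once the log rows are in tables, each
customer could be given a view restricted to errors from their own
databases; a minimal sketch, reusing the illustrative tables from the
example earlier (the customer role and database names are made up):

    CREATE VIEW acme_errors AS
        SELECT s.log_time, c.dbname, s.message
          FROM log_sessions s
          JOIN log_connections c USING (pid)
         WHERE s.severity = 'ERROR'
           AND c.dbname IN ('acme_main', 'acme_stats');  -- this customer's databases

    GRANT SELECT ON acme_errors TO acme;                 -- hypothetical customer role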
--
Bruce Momjian | http://candle.pha.pa.us
pgman(at)candle(dot)pha(dot)pa(dot)us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073