From: Greg Stark <stark(at)mit(dot)edu>
To: otheus uibk <otheus(dot)uibk(at)gmail(dot)com>
Cc: Forums postgresql <pgsql-general(at)postgresql(dot)org>
Subject: Re: Feature request: separate logging
Date: 2018-02-27 11:40:04
Message-ID: CAM-w4HNxZMzmdNv0MKeUvg=2ZewRODUaBvMF+epk0ZiR6XrHGg@mail.gmail.com
Lists: pgsql-general
On 18 November 2016 at 13:00, otheus uibk <otheus(dot)uibk(at)gmail(dot)com> wrote:
> What I do today is to configure postgresql to write csvlogs. Stdout/stderr
> are captured by journald. A custom perl script with the Text::CSV module and
> tail -F semantics continuously processes the csvlog file, ignores query,
> dml, and detail log lines, and sends the rest via syslog() (which journald
> then handles).
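If I'm reading that right, the moving parts are roughly these -- a
sketch only, not your actual Perl, with a made-up file path and
column positions assuming the stock csvlog layout:

    import csv
    import syslog
    import time

    def follow(path):
        # tail -F-ish: start at the end and yield lines as they are
        # appended (no rotation handling, to keep the sketch short)
        with open(path) as f:
            f.seek(0, 2)
            while True:
                line = f.readline()
                if line:
                    yield line
                else:
                    time.sleep(0.5)

    SEVERITY = 11   # error_severity column in the stock csvlog layout
    MESSAGE = 13    # message column

    syslog.openlog(ident="postgres-csv", facility=syslog.LOG_LOCAL0)
    for row in csv.reader(follow("/var/log/postgresql/postgresql.csv")):
        msg = row[MESSAGE]
        if msg.startswith(("statement: ", "duration: ")):
            continue    # drop query/duration noise, forward the rest
        syslog.syslog(syslog.LOG_INFO, "%s: %s" % (row[SEVERITY], msg))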
Postgres supports syslog directly, and syslogd supports directing
logs to various destinations. If you want to filter the logs and send
them to different servers, you can interpose a remote syslogd server
to do exactly what you want.
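For reference, the Postgres side of that is just a couple of
postgresql.conf settings (the facility and ident values here are only
examples):

    log_destination = 'syslog'
    syslog_facility = 'LOCAL0'    # example facility
    syslog_ident = 'postgres'     # example tag on the syslog messages

The filtering and forwarding rules then live in the syslog daemon's
configuration rather than in a bespoke log parser.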
I think using a pipe, or writing to a file and then having a program
parse the text back into messages and pick the fields out, is
fundamentally a huge waste of programmer time and cycles, as well as
error-prone and susceptible to security problems. It is much better
to have options for Postgres to generate logs in the right format,
and send them over the right protocol, to begin with.
> 1. Write to a csvlog with one set of selectors
> 2. Write to stdout/stderr a different set of selectors (no statement, no
> autovacuum, etc)
Being able to send different messages to different places isn't a bad
idea. But everyone is going to have a different idea of what should
go in which bucket, so this will need more thought about the details.
Perhaps we could get away with just using the error class (the first
two characters of the SQL error code; see
src/backend/utils/errcodes.h), but that doesn't help with warnings
and lower-level messages. And some of those warnings are pretty
important operational tips, like raising checkpoint or autovacuum
parameters.
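To make that concrete, routing on the class would amount to something
like this -- a sketch with made-up bucket names, keyed on the
sql_state_code column of the csvlog:

    # Hypothetical routing table keyed on the SQLSTATE class, i.e. the
    # first two characters of the five-character code.
    ROUTES = {
        "23": "app-errors",   # integrity constraint violation
        "53": "ops",          # insufficient resources
        "58": "ops",          # system error
        "XX": "pager",        # internal error
    }

    def bucket(sql_state_code):
        # Warnings and lower-level messages mostly carry "00000" or no
        # useful code at all, which is exactly the gap described above.
        return ROUTES.get(sql_state_code[:2], "default")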
> 2.1. has the kind of detail contained in the CSV. Currently, the
> log-prefix option does not offer some of the information provided in the CSV
> logs. Really, the CSV log should simply be an implementation of the
> log-prefix.
> 2.2. Collapses multi-lined queries into one line (newlines and tabs
> are escaped with backslashes or the x1B character).
CSV specifies exactly how to handle newlines and quoting, and if
you're not happy with that format -- and I would agree with you --
there are myriad other standard formats such as JSON and msgpack.
There's no need to invent an almost-CSV that has most of the problems
of CSV except one. One question immediately arises: how do you plan
to escape the x1B character? (And before you say it's unlikely to
appear in the data, consider that one of the main uses for csvlogs is
to load them into Postgres, so...)
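To make the point concrete, any of the standard formats already pins
the escaping down; for example JSON (a throwaway illustration using
Python's json module, nothing Postgres-specific):

    import json
    # Newlines, tabs and the 0x1B byte all have well-defined,
    # reversible escapes in JSON -- there is nothing left to invent.
    print(json.dumps({"message": "multi\nline\tquery with \x1b in it"}))
    # prints: {"message": "multi\nline\tquery with \u001b in it"}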
I feel your pain; I'm trying to get logstash or fluentd working here
too, and I'm amazed that neither has a correct CSV parser. It seems
like such a basic requirement for something designed to handle logs
that it's quite mysterious to me. Both have the same dumpster fire of
a multiline parser that depends on recognizing continuation lines
with a regexp.
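For comparison, a real CSV parser needs no continuation-line
heuristics at all; fed a csvlog it reassembles multi-line records by
itself (the file name here is made up):

    import csv
    # The reader keeps consuming physical lines while a quoted field
    # is still open, so a query containing newlines comes back as a
    # single record rather than several broken ones.
    with open("postgresql.csv", newline="") as f:
        for record in csv.reader(f):
            print(len(record), "fields, regardless of embedded newlines")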
> Finally, if these changes can be implemented, is it impossible to backport
> them to prior versions, say 9.1 and up? If I wrote a patch, under what
> conditions would the patch be accepted for inclusion in official releases of
> older versions?
The only way to support older versions would be to publish it
separately as an extension, like the jsonlog extension. There's a
hook for logging, so it should be possible, but it might not be easy.
The existing jsonlog extension has some quirky bits to deal with
messages at startup, for example.
--
greg