Re: Logging: stderr vs syslog?

From: Evan Rempel <erempel(at)uvic(dot)ca>
To: pgsql-admin(at)postgresql(dot)org
Subject: Re: Logging: stderr vs syslog?
Date: 2017-08-04 19:49:32
Message-ID: da1673fa-df23-937b-69ad-c6290ff0a607@uvic.ca
Lists: pgsql-admin

I have been a systems administrator of PostgreSQL since version 7.0. I
am also the primary logging architect for a 4000+ device network
infrastructure and the architect of its SIEM alerting infrastructure.

Syslog is by far the best approach to logging. With the right product it
is way FASTER than stderr, and there are a ton of tools to parse,
analyze, view, and report on syslog streams.

There is one caveat with PostgreSQL versions <= 9.5: syslog messages are
wrapped at approximately 80 characters, which makes parsing and error
detection problematic. pgBadger may address this limitation by unwrapping
such log messages, whereas generic log parsing engines have no
specialized knowledge of how PostgreSQL lines might be wrapped. In
versions <= 9.5 this wrapping is a compile-time-only option, but starting
in 9.6 it is a runtime configuration directive.
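
For illustration, here is a minimal postgresql.conf sketch for sending the
server log to syslog. The facility and ident values are only examples, and
the 9.6+ runtime directive I am referring to is syslog_split_messages:

    # postgresql.conf -- send server log output to syslog
    log_destination = 'syslog'        # instead of 'stderr'
    syslog_facility = 'local0'        # example facility; match your syslog daemon's config
    syslog_ident    = 'postgres'      # tag prepended to each syslog message
    # 9.6 and later only: keep long messages on one line instead of splitting them
    syslog_split_messages = off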

There are two strong reasons for using syslog.

1. In a well-architected logging solution, the syslog process on the host
will also send the log messages to a central log server (see the sketch
below). This means that if the database server is compromised, leaving an
untrusted set of local log files, there is still a trusted copy of the
logs on another server.

2. When running a high-availability or clustered database, all of the
logs can be aggregated to a central log server, which puts the logs from
all of the database servers in one easy to read/parse/process location.
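
As a rough sketch of what that forwarding can look like (the hostname, the
file path, and the local0 facility are placeholders, and the exact syntax
depends on your syslog daemon), an rsyslog rule on each database host
might be:

    # /etc/rsyslog.d/30-postgres.conf (example path)
    # keep a local copy of PostgreSQL messages (facility local0, matching syslog_facility)...
    local0.*    /var/log/postgresql.log
    # ...and also forward them to the central log server (@@ = TCP, @ = UDP)
    local0.*    @@central-log-host.example.com:514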

I hope this provides some rationale for using syslog.

Evan.

On 08/04/2017 09:26 AM, Don Seiler wrote:
> I've just inherited a few PostgreSQL DBs, having come from Oracle
> land. I'm looking to shore up the logging situation. Right now we use
> stderr logging and they get rotated based on size threshold. I'd like
> for those old logs to be gzipped so we can keep more on disk rather
> than the current method of just deleting old logs to free up space. This
> is mostly on pgsql 9.2 with a couple of 9.3, but I'm planning to
> upgrade everything to 9.6.3 when I get my feet on solid ground.
>
> Couple of questions around this:
>
> 1. I thought logrotate would be a no-brainer here, but it sounds like
> I should then change to use syslog rather than stderr. I've read
> some caveats around syslog needing to sync files and potentially
> slow things down. I'm wondering if any grizzled production
> postgres veterans could offer up their experience.
> 2. Alternatively I could just keep it going with stderr and have a
> separate script run find/gzip on log files beyond a certain mtime
> threshold. This would probably be the quickest to implement, but
> I'd much rather use logrotate facilities if there are no strong
> opinions against using syslog.
>
> Thanks in advance for your time, I'm sure I'll be making a lot of use
> of these mailing lists in the not-too-distant future.
>
> Don.
>
> --
> Don Seiler
> www.seiler.us
