From: Kuntal Ghosh <kuntalghosh(dot)2007(at)gmail(dot)com>
To: Rafia Sabih <rafia(dot)sabih(at)enterprisedb(dot)com>
Cc: pgsql-bugs <pgsql-bugs(at)postgresql(dot)org>, thom(at)linux(dot)com
Subject: Re: log_destination reload/restart doesn't stop file creation
Date: 2018-09-14 06:08:42
Message-ID: CAGz5QCJ7qwmaKFwh9MXG2zcRR74jJKPOsdTQA3TFV3LPbBu-sQ@mail.gmail.com
Lists: pgsql-bugs
On Mon, Sep 10, 2018 at 2:56 PM, Rafia Sabih
<rafia(dot)sabih(at)enterprisedb(dot)com> wrote:
>
>
> On Tue, Aug 14, 2018 at 6:02 PM, Thom Brown <thom(at)linux(dot)com> wrote:
>>
>> Hi,
>>
>> I've tested the following on git head. When changing log_destination
>> and reloading, the old destination file continues to be made, just not
>> populated with anything. That means at every file rotation, 2 files
>> are created.
>>
>> For example:
>>
>> log_destination = 'stderr'
>>
>> I get this in my log directory:
>>
>> postgresql-2018-08-14_131640.log
>>
>> If I change it to csvlog and reload, I end up with:
>>
>> 0 -rw------- 1 thom thom 0 Aug 14 13:19 postgresql-2018-08-14_131900.log
>> 4 -rw------- 1 thom thom 194 Aug 14 13:19 postgresql-2018-08-14_131900.csv
>>
>> So I get the csv file, but it's still producing the .log file which
>> remains 0 bytes. The same happens in reverse. (i.e. I end up with an
>> empty .csv file and a populated .log file).
>>
>> I expect the old file to stop being created.
>>
>> What's also interesting is that if I have log_destination set to
>> 'csvlog' and then restart, or stop and start the database manually, I
>> still get 2 files. I then continue to get empty .log files:
>> 0 -rw------- 1 thom thom 0 Aug 14 13:26 postgresql-2018-08-14_132600.log
>> 0 -rw------- 1 thom thom 0 Aug 14 13:26 postgresql-2018-08-14_132600.csv
>> 0 -rw------- 1 thom thom 0 Aug 14 13:27 postgresql-2018-08-14_132700.log
>> 0 -rw------- 1 thom thom 0 Aug 14 13:27 postgresql-2018-08-14_132700.csv
>> 0 -rw------- 1 thom thom 0 Aug 14 13:28 postgresql-2018-08-14_132800.log
>> 4 -rw------- 1 thom thom 195 Aug 14 13:28 postgresql-2018-08-14_132800.csv
>> 0 -rw------- 1 thom thom 0 Aug 14 13:29 postgresql-2018-08-14_132900.log
>> 0 -rw------- 1 thom thom 0 Aug 14 13:29 postgresql-2018-08-14_132900.csv
>>
>> This doesn't happen if log_destination is set to 'stderr'.
>>
> Regarding this issue: logfile_rotate creates a .log file by default. I
> didn't quite get the logic behind that and assumed it to be a bug.
> Hence, in the attached patch I have added a check to see whether we need
> to create a log file, and only create it then.
>
It doesn't look like a bug to me.
If log_destination is set to csv, there are a few cases in which we
create both the csv file and the log file:
1. pg_ctl logrotate / SELECT pg_rotate_logfile(): When either of these
is issued, the user is forcing a log switch, so both logfiles should be
switched.
2. Time-based rotation: With time-based rotation, it doesn't matter how
many bytes have been written to the *.log file, so both *.log and *.csv
should be switched. A lot of external log management tools (like
logrotate, pgBadger) expect this behavior when time-based rotation is
used.
3. Size-based rotation: In this case, the *.log file is switched only
when it exceeds the specified/default log size, so I don't see any
problem here.
4. During restart: you can find the explanation for this in
SysLogger_Start(void):
/*
 * The initial logfile is created right in the postmaster, to verify that
 * the Log_directory is writable. We save the reference time so that the
 * syslogger child process can recompute this file name.
 *
 * It might look a bit strange to re-do this during a syslogger restart,
 * but we must do so since the postmaster closed syslogFile after the
 * previous fork (and remembering that old file wouldn't be right anyway).
 * Note we always append here, we won't overwrite any existing file. This
 * is consistent with the normal rules, because by definition this is not
 * a time-based rotation.
 */
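For reference, cases 1-3 above correspond to the standard rotation GUCs; a minimal postgresql.conf sketch (the values shown are illustrative, not a recommendation):

```
# postgresql.conf (illustrative values)
logging_collector = on        # required for csvlog and log rotation
log_destination   = 'csvlog'  # a plain .log file is still kept open
log_rotation_age  = 1d        # time-based rotation (case 2)
log_rotation_size = 10MB      # size-based rotation (case 3)
# case 1 is a manual switch: pg_ctl logrotate, or SELECT pg_rotate_logfile();
```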
It's important to note that there are a few (special) cases in which we
always write to the *.log file even when log_destination is set to csv.
Please see the following comment in write_syslogger_file(const char
*buffer, int count, int destination):
/*
 * If we're told to write to csvlogFile, but it's not open, dump the data
 * to syslogFile (which is always open) instead. This can happen if CSV
 * output is enabled after postmaster start and we've been unable to open
 * csvlogFile. There are also race conditions during a parameter change
 * whereby backends might send us CSV output before we open csvlogFile or
 * after we close it. Writing CSV-formatted output to the regular log
 * file isn't great, but it beats dropping log output on the floor.
 *
 * Think not to improve this by trying to open csvlogFile on-the-fly. Any
 * failure in that would lead to recursion.
 */
To test this, we can kill the postmaster while the logger process is
still running, with the log level set to debug5. The logger's
shutting-down debug messages will be redirected to the *.log file.
Please let me know if you think otherwise.
--
Thanks & Regards,
Kuntal Ghosh
EnterpriseDB: http://www.enterprisedb.com