From: Daniel Gustafsson <daniel(at)yesql(dot)se>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Andrew Dunstan <andrew(at)dunslane(dot)net>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Cluster::restart dumping logs when stop fails
Date: 2024-04-07 16:51:40
Message-ID: 6740DD2B-81C3-4B49-A359-7D592970FEB9@yesql.se
Lists: pgsql-hackers
> On 7 Apr 2024, at 18:28, Andres Freund <andres(at)anarazel(dot)de> wrote:
>
> On 2024-04-07 16:52:05 +0200, Daniel Gustafsson wrote:
>>> On 7 Apr 2024, at 14:51, Andrew Dunstan <andrew(at)dunslane(dot)net> wrote:
>>> On 2024-04-06 Sa 20:49, Andres Freund wrote:
>>
>>>> That's probably unnecessary optimization, but it seems a tad silly to read an
>>>> entire, potentially sizable, file to just use the last 1k. Not sure if the way
>>>> slurp_file() uses seek supports negative offsets, the docs read to me like that
>>>> may only be supported with SEEK_END.
>>>
>>> We should enhance slurp_file() so it uses SEEK_END if the offset is negative.
>>
>> Absolutely agree. Reading the thread I think Andres argues for not printing
>> anything at all in this case, but we should support negative offsets anyway; it
>> will for sure come in handy.
>
> I'm ok with printing path + some content or just the path.
I think printing the last 512 bytes or so would be a good approach, I'll take
care of it later tonight. That would be a backpatchable change IMHO.
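The enhancement under discussion, seeking from the end of the file when the offset is negative so only the tail is read, can be sketched as follows. This is a Python illustration of the technique rather than the actual Perl of PostgreSQL::Test::Utils's slurp_file(), and the function name slurp_file_tail is hypothetical:

```python
import os

def slurp_file_tail(path, offset=None):
    """Read a file as text. If offset is negative, seek that many
    bytes from the end (SEEK_END) and return only the tail; a
    non-negative offset seeks from the start as before."""
    with open(path, "rb") as f:
        if offset is not None:
            if offset < 0:
                # Seek relative to the end of the file, avoiding
                # reading a potentially sizable file just for its tail.
                f.seek(offset, os.SEEK_END)
            else:
                f.seek(offset, os.SEEK_SET)
        return f.read().decode("utf-8", errors="replace")
```

With this, dumping the last 512 bytes of a log becomes slurp_file_tail(logfile, -512), without reading the whole file first. Note this sketch does not guard against a negative offset larger than the file, which would make the seek fail.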
--
Daniel Gustafsson