From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Juan José Santamaría Flecha <juanjo(dot)santamaria(at)gmail(dot)com>
Cc: egashira(dot)yusuke(at)fujitsu(dot)com, PostgreSQL mailing lists <pgsql-bugs(at)lists(dot)postgresql(dot)org>
Subject: Re: BUG #17254: Crash with 0xC0000409 in pg_stat_statements when pg_stat_tmp\pgss_query_texts.stat exceeded 2GB.
Date: 2021-10-30 20:50:48
Message-ID: 856857.1635627048@sss.pgh.pa.us
Lists: pgsql-bugs
Juan José Santamaría Flecha <juanjo(dot)santamaria(at)gmail(dot)com> writes:
> On Sat, Oct 30, 2021 at 6:26 PM Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> I think instead, we need to turn the subsequent one-off read() call into a
>> loop that reads no more than INT_MAX bytes at a time. It'd be possible
>> to restrict that to Windows, but probably no harm in doing it the same
>> way everywhere.
> Seems reasonable to me; can such a change be back-patched?
Don't see why not.
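
[For illustration, a minimal sketch of the chunked-read loop proposed
above, assuming a plain POSIX-style file descriptor and a caller-supplied
buffer of known size. The name read_in_chunks and its signature are
invented for this sketch; this is not the actual patch.]

    #include <limits.h>     /* INT_MAX */
    #include <stdbool.h>
    #include <stddef.h>
    #include <unistd.h>     /* read() */

    /*
     * Read exactly "total" bytes from fd into buf, issuing read()
     * calls of no more than INT_MAX bytes each, so the request size
     * stays within what Windows' read() can handle.  Returns false on
     * a read error or premature EOF; the caller can inspect errno.
     */
    static bool
    read_in_chunks(int fd, char *buf, size_t total)
    {
        size_t  done = 0;

        while (done < total)
        {
            size_t  chunk = total - done;
            ssize_t nread;

            if (chunk > (size_t) INT_MAX)
                chunk = (size_t) INT_MAX;

            nread = read(fd, buf + done, chunk);
            if (nread < 0)
                return false;   /* read error */
            if (nread == 0)
                return false;   /* unexpected EOF */
            done += (size_t) nread;
        }
        return true;
    }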
>> A different line of thought is that maybe we shouldn't be letting the
>> file get so big in the first place. Letting every backend have its
>> own copy of a multi-gigabyte stats file is going to be problematic,
>> and not only on Windows. It looks like the existing logic just considers
>> the number of hash table entries, not their size ... should we rearrange
>> things to keep a running count of the space used?
> +1. There should be a mechanism to limit the effective memory size.
This, on the other hand, would likely be something for HEAD only.
But now that we've seen a field complaint, it seems like a good
thing to pursue.
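
[To make that concrete, a hypothetical bookkeeping sketch; the type and
function names below are invented for illustration and are not the
actual pg_stat_statements internals. The idea is to keep a running byte
total next to the entry count, update it whenever a query text is
stored or discarded, and consult a size budget rather than the entry
count alone when deciding whether to garbage-collect.]

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical shared-state counters: entries plus total bytes. */
    typedef struct SizeAccounting
    {
        int64_t n_entries;   /* number of hash table entries */
        int64_t text_bytes;  /* running total of stored query-text bytes */
    } SizeAccounting;

    static void
    note_text_stored(SizeAccounting *acct, size_t query_len)
    {
        acct->n_entries++;
        acct->text_bytes += (int64_t) query_len;
    }

    static void
    note_text_removed(SizeAccounting *acct, size_t query_len)
    {
        acct->n_entries--;
        acct->text_bytes -= (int64_t) query_len;
    }

    /* Trigger garbage collection on a byte budget, not entry count. */
    static bool
    need_gc(const SizeAccounting *acct, int64_t byte_budget)
    {
        return acct->text_bytes > byte_budget;
    }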
regards, tom lane