From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: Robert Treat <rob(at)xzilla(dot)net>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Nikolay Samokhvalov <samokhvalov(at)gmail(dot)com>, Michael Paquier <michael(at)paquier(dot)xyz>, Bruce Momjian <bruce(at)momjian(dot)us>, Alexey Kondratov <a(dot)kondratov(at)postgrespro(dot)ru>, v(dot)makarov(at)postgrespro(dot)ru, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: [PATCH] Increase the maximum value track_activity_query_size
Date: 2019-12-30 21:33:42
Message-ID: 20191230213342.jmtvp2kmqdtt55sp@development
Lists: pgsql-hackers
On Mon, Dec 30, 2019 at 12:46:40PM -0800, Robert Treat wrote:
>On Mon, Dec 23, 2019 at 9:11 PM Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>>
>> On Sat, Dec 21, 2019 at 1:25 PM Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> > > What is the overhead here except the memory consumption?
>> >
>> > The time to copy those strings out of shared storage, any time
>> > you query pg_stat_activity.
>>
>> It seems like you're masterminding this, and I don't know why. It
>> seems unlikely that anyone will raise the value unless they have very
>> long queries, and if those people would rather pay the overhead of
>> copying more data than have their queries truncated, who are we to
>> argue?
>>
>> If increasing the maximum imposed some noticeable cost on
>> installations that kept the default setting, that might well be a good
>> argument for not raising the maximum. But I don't think that's the
>> case. I also suspect that the overhead would be pretty darn small even
>> for people who *do* raise the default setting. It looks to me like
>> both read and write operations on st_activity_raw stop when they
>> hit a NUL byte, so any performance costs on short queries must come
>> from second-order effects (e.g. the main shared memory segment is
>> bigger, so the OS cache is smaller) which are likely irrelevant in
>> practice.
>>
>
>I'm generally in favor of the idea of allowing people to make
>trade-offs that work best for them, but Tom's concern does give me
>pause, because it isn't clear to me how people will measure the
>overhead of upping this setting. If given the option, people will
>almost certainly start raising this limit because the benefits are
>obvious ("I can see my whole query now!"), but so far the explanations
>of the downsides have been either hand-wavy or, in the case of your
>second paragraph, an argument that they are non-existent, which doesn't
>seem right either. So how do we explain to people how to measure the
>overhead for themselves?
>
I think there are two questions that we need to answer:

1) Does allowing higher values for the GUC mean overhead for people who
don't actually increase it?

I don't think so.
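
To make (1) a bit more concrete, here is a rough sketch in SQL: the
activity buffers are sized from the value you actually configure, so the
shared memory spent on query strings scales with the setting, not with
the allowed maximum. The slot count used below (connections, autovacuum
workers, background workers, WAL senders) is only an approximation of
what the server really allocates, so treat the result as a ballpark:

    -- Ballpark of shared memory used for activity strings: one buffer of
    -- track_activity_query_size bytes per backend-ish slot. The slot count
    -- below is an approximation, not the server's exact formula.
    SELECT pg_size_pretty(
             (SELECT setting::bigint
                FROM pg_settings
               WHERE name = 'track_activity_query_size')
             * (SELECT sum(setting::bigint)
                  FROM pg_settings
                 WHERE name IN ('max_connections',
                                'autovacuum_max_workers',
                                'max_worker_processes',
                                'max_wal_senders'))
           ) AS approx_activity_string_memory;

With the default 1kB setting this stays in the hundreds of kilobytes for
typical connection counts; only raising the setting itself grows it.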
2) What's the overhead of increasing the value, for short and for long
queries?

My assumption is that for short queries it's going to be negligible.
For longer queries it may be measurable, but I'd expect longer queries
to be more expensive in general anyway, so it's probably still negligible
in relative terms.

Of course, the easiest thing we can do is to actually measure this.
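
For example, a crude psql sketch (not a rigorous benchmark; the loop
count and the padded test query are arbitrary choices): keep a long,
padded statement visible in pg_stat_activity from one session, time
repeated reads of the view from another, and compare the numbers after
restarting with a larger track_activity_query_size (it's a
postmaster-level setting, so changing it requires a restart):

    -- Session A (hypothetical workload): keep a long statement visible in
    -- pg_stat_activity while it sleeps, e.g. padded with a large comment:
    --   SELECT pg_sleep(60) /* ...pad this comment out to ~100kB... */;

    -- Session B: time many reads of pg_stat_activity and compare runs
    -- with different track_activity_query_size values.
    \timing on

    DO $$
    DECLARE
      n bigint;
    BEGIN
      FOR i IN 1 .. 10000 LOOP
        SELECT count(*) INTO n
          FROM pg_stat_activity
         WHERE query <> '';
      END LOOP;
    END
    $$;

If the numbers barely move for short queries, that would support the
assumption above; if long queries show a measurable difference, at least
we'd have something concrete to document.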
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services