| From: | Rick Otten <rottenwindfish(at)gmail(dot)com> |
|---|---|
| To: | Jeremy Finzel <finzelj(at)gmail(dot)com> |
| Cc: | "pgsql-performa(dot)" <pgsql-performance(at)postgresql(dot)org> |
| Subject: | Re: Impact of track_activity_query_size on high traffic OLTP system |
| Date: | 2017-04-13 21:17:07 |
| Message-ID: | CAMAYy4LzMe2=b4y1TCHp060ggAaz8RnuYfGuG3aS1HohZiJt0w@mail.gmail.com |
| Lists: | pgsql-performance |
I always bump it up, but usually just to 4096, because I often have queries
that are longer than 1024 and I'd like to be able to see the full query.
I've never seen any significant memory impact. I suppose if you had
thousands of concurrent queries it would add up, but if you only have a few
dozen, or even a few hundred queries at any given moment, it doesn't seem
to impact things very much on a modern system.
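A rough way to reason about the cost: the query-text buffer is allocated once
per backend slot in shared memory at server start (roughly max_connections plus
a handful of auxiliary processes), so the overhead scales with max_connections,
not with connection churn or how many queries are running. A minimal sketch of
the arithmetic (the function name and the slot count of 500 are illustrative,
not from the thread):

```python
# Hedged sketch: shared-memory cost of track_activity_query_size.
# Postgres preallocates one query-text buffer of this size per backend
# slot, so the total is independent of how busy the connections are.

def query_text_memory_bytes(track_activity_query_size, backend_slots):
    """Approximate shared memory used for pg_stat_activity query text."""
    return track_activity_query_size * backend_slots

# Raising the setting from the default 1024 to 10000 with ~500 slots:
delta = query_text_memory_bytes(10000, 500) - query_text_memory_bytes(1024, 500)
print(round(delta / (1024 * 1024), 1))  # about 4.3 MiB extra, total
```

On that back-of-the-envelope basis, even 10000 bytes per slot costs only a few
megabytes of shared memory on a typical system, which matches the "never seen
any significant memory impact" experience above.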
On Thu, Apr 13, 2017 at 4:45 PM, Jeremy Finzel <finzelj(at)gmail(dot)com> wrote:
> I have found some examples of people tweaking this
> parameter track_activity_query_size to various setting such as 4000,
> 10000, 15000, but little discussion as to performance impact on memory
> usage. What I don't have a good sense of is how significant this would be
> for a high traffic system with rapid connection creation/destruction, say
> 1000s per second. In such a case, would there be a reason to hesitate
> raising it to 10000 from 1024? Is 10k memory insignificant? Any direction
> here is much appreciated, including a good way to benchmark this kind of
> thing.
>
> Thanks!
>
| | From | Date | Subject |
|---|---|---|---|
| Next Message | Hans Braxmeier | 2017-04-14 23:30:10 | Postgres 9.5 / 9.6: Restoring PG 9.4 dump is very very slow |
| Previous Message | Jeremy Finzel | 2017-04-13 20:45:49 | Impact of track_activity_query_size on high traffic OLTP system |