From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Achilleas Mantzios <achill(at)matrix(dot)gatewaynet(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: syslog performance when logging big statements
Date: 2008-07-08 18:34:01
Message-ID: 12596.1215542041@sss.pgh.pa.us
Lists: pgsql-performance
Achilleas Mantzios <achill(at)matrix(dot)gatewaynet(dot)com> writes:
> Tuesday 08 July 2008 17:35:16 / Tom Lane :
>> Hmm. There's a function in elog.c that breaks log messages into chunks
>> for syslog. I don't think anyone's ever looked hard at its performance
>> --- maybe there's an O(N^2) behavior?
> Thanx,
> i changed PG_SYSLOG_LIMIT in elog.c:1269 from 128 to 1048576
> and i got super fast stderr performance. :)
Doesn't seem like a very good solution given its impact on the stack
depth right there.

Looking at the code, the only bit that looks like repeated work is the
repeated calls to strchr(), which would not be an issue in the "typical"
case where the very long message contains reasonably frequent newlines.
Am I right in guessing that your problematic statement contained
megabytes worth of text with nary a newline?

If so, we can certainly fix it by arranging to remember the last
strchr() result across loop iterations, but I'd like to confirm the
theory before doing that work.
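
For illustration, a minimal sketch of what that fix could look like. This
is not PostgreSQL's actual elog.c code; PG_SYSLOG_LIMIT's value matches
the one quoted above, but count_chunks() and its shape are hypothetical.
The point is that the strchr() result is remembered across loop
iterations, so a newline-free N-byte message costs O(N) to split instead
of rescanning the whole remainder for every chunk:

```c
#include <string.h>

/* Hypothetical sketch, not PostgreSQL's actual code. */
#define PG_SYSLOG_LIMIT 128

static int
count_chunks(const char *line)
{
    size_t      len = strlen(line);      /* measured once */
    const char *nl = strchr(line, '\n'); /* searched once; refreshed only
                                          * after a newline is consumed */
    int         chunks = 0;

    while (len > 0)
    {
        size_t      buflen;

        if (nl != NULL && (size_t) (nl - line) < PG_SYSLOG_LIMIT)
        {
            /* chunk ends at the newline; skip it, find the next one */
            buflen = (size_t) (nl - line);
            line += buflen + 1;
            len -= buflen + 1;
            nl = strchr(line, '\n');
        }
        else
        {
            /* no nearby newline: full-size chunk, no rescan needed,
             * since the cached nl (if any) is still ahead of us */
            buflen = (len < PG_SYSLOG_LIMIT) ? len : PG_SYSLOG_LIMIT;
            line += buflen;
            len -= buflen;
        }
        chunks++;                        /* one syslog() call per chunk */
    }
    return chunks;
}
```

With this shape, a 1 MB single-line message still produces ~8000 syslog
calls, but each iteration does constant work apart from the copy itself.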
regards, tom lane