| From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
|---|---|
| To: | Andrew Dunstan <andrew(at)dunslane(dot)net> |
| Cc: | PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org> |
| Subject: | Re: log chunking broken with large queries under load |
| Date: | 2012-04-02 16:00:14 |
| Message-ID: | 6290.1333382414@sss.pgh.pa.us |
| Lists: | pgsql-hackers |
Andrew Dunstan <andrew(at)dunslane(dot)net> writes:
> On 04/01/2012 06:34 PM, Andrew Dunstan wrote:
>> Some of my PostgreSQL Experts colleagues have been complaining to me
>> that servers under load with very large queries produce corrupted CSV
>> log files,
> We could just increase CHUNK_SLOTS in syslogger.c, but I opted instead
> to stripe the slots with a two-dimensional array, so that we don't have
> to search a larger number of slots for any given message. See the
> attached patch.
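A minimal sketch of that striping idea, with hypothetical names and constants rather than the code from the attached patch, might look like this:

```c
/*
 * Sketch only: constants and find_slot_for_pid() are illustrative, not
 * taken from the attached patch.
 */
#include "postgres.h"
#include "lib/stringinfo.h"

#define NUM_STRIPES      4
#define SLOTS_PER_STRIPE 20

typedef struct
{
	int32			pid;	/* PID of sending backend, 0 if slot is free */
	StringInfoData	data;	/* accumulated chunks of one message */
} save_buffer;

static save_buffer saved_chunks[NUM_STRIPES][SLOTS_PER_STRIPE];

/*
 * Pick a stripe from the sender's PID, then search only that stripe, so the
 * per-message search cost stays bounded as total capacity grows.
 */
static save_buffer *
find_slot_for_pid(int32 pid)
{
	save_buffer *stripe = saved_chunks[pid % NUM_STRIPES];
	int			free_idx = -1;
	int			i;

	for (i = 0; i < SLOTS_PER_STRIPE; i++)
	{
		if (stripe[i].pid == pid)
			return &stripe[i];		/* existing partial message */
		if (stripe[i].pid == 0 && free_idx < 0)
			free_idx = i;			/* remember first free slot */
	}
	return (free_idx >= 0) ? &stripe[free_idx] : NULL;	/* NULL: stripe full */
}
```

The search is bounded to SLOTS_PER_STRIPE entries, but the total capacity is still fixed, which is the concern raised below.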
This seems like it isn't actually fixing the problem, only pushing out
the onset of trouble a bit. Should we not replace the fixed-size array
with a dynamic data structure?
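A minimal sketch of what such a dynamic structure could look like, using a dynahash table keyed by the sender's PID; the names and constants here are illustrative assumptions, not a worked-out design:

```c
/*
 * Sketch only: chunk_entry, init_chunk_table() and get_chunk_entry() are
 * hypothetical names, not from any actual patch.
 */
#include "postgres.h"
#include "lib/stringinfo.h"
#include "utils/hsearch.h"

typedef struct
{
	int32			pid;	/* hash key: PID of sending backend */
	StringInfoData	data;	/* accumulated chunks of one message */
} chunk_entry;

static HTAB *chunk_table = NULL;

static void
init_chunk_table(void)
{
	HASHCTL		ctl;

	memset(&ctl, 0, sizeof(ctl));
	ctl.keysize = sizeof(int32);
	ctl.entrysize = sizeof(chunk_entry);
	ctl.hash = tag_hash;

	/* dynahash grows as needed; 64 is only the initial size hint */
	chunk_table = hash_create("syslogger chunk buffers", 64, &ctl,
							  HASH_ELEM | HASH_FUNCTION);
}

/* Find the partial message for this PID, creating an empty one if needed. */
static chunk_entry *
get_chunk_entry(int32 pid)
{
	chunk_entry *entry;
	bool		found;

	entry = (chunk_entry *) hash_search(chunk_table, &pid,
										HASH_ENTER, &found);
	if (!found)
		initStringInfo(&entry->data);	/* new sender: start an empty buffer */
	return entry;
}
```

Lookups stay cheap no matter how many backends are sending chunks at once, at the cost of having to drop an entry (hash_search with HASH_REMOVE) once its message is complete.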
regards, tom lane