From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Andrew Dunstan <andrew(at)dunslane(dot)net>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: log chunking broken with large queries under load
Date: 2012-04-02 16:44:50
Message-ID: 7284.1333385090@sss.pgh.pa.us
Lists: pgsql-hackers
Andrew Dunstan <andrew(at)dunslane(dot)net> writes:
> On 04/02/2012 12:00 PM, Tom Lane wrote:
>> This seems like it isn't actually fixing the problem, only pushing out
>> the onset of trouble a bit. Should we not replace the fixed-size array
>> with a dynamic data structure?
> But maybe you're right. If we do that and stick with my two-dimensional
> scheme to keep the number of probes per chunk down, we'd need to reorg
> the array every time we increased it. That might be a bit messy, but
> might be ok. Or maybe linearly searching an array of several hundred
> slots for our pid for every log chunk that comes in would be fast enough.
You could do something like having a list of pending chunks for each
value of (pid mod 256). The length of each such list ought to be plenty
short under ordinary circumstances.
regards, tom lane