From: Decibel! <decibel(at)decibel(dot)org>
To: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
Cc: David Miller <miller392(at)yahoo(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Backend Stats Enhancement Request
Date: 2008-06-20 01:03:03
Message-ID: D9F8AB88-47E8-4190-BF3F-616A17C879D1@decibel.org
Lists: pgsql-hackers
On Jun 19, 2008, at 10:26 AM, Alvaro Herrera wrote:
> David Miller wrote:
>
>> That is fine.. Maybe a dynamic configurable parameter that can be
>> set/updated while the database is running.
>
> If it were a parameter, it could not be changed while the database is
> running.
>
>> This issue lies in the fact that we have queries larger than 1K and
>> we would like to be able to capture the entire query from Postgres
>> Studio without having to process the log files..
>
> Have you considered using CSV logs instead? Should be easier to
> process.
Would it be hard to have a backend write its complete command out to
a file if the command lasts more than X number of seconds, and then
allow other backends to read it from there? It is extremely annoying
not to be able to get the full query contents.
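Roughly the shape I have in mind -- just a sketch, and the names here
(dump_long_query(), the pg_long_queries/ directory, the threshold) are
made up for illustration, not anything that exists in the backend today:

/*
 * Sketch only: once a statement has run longer than some threshold,
 * spill its full text to a small per-PID file that other backends
 * (or a view over that directory) could read.  All names hypothetical.
 */
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static void
dump_long_query(const char *query_text, time_t query_start,
                int threshold_secs)
{
    char    path[64];
    FILE   *f;

    if (time(NULL) - query_start < threshold_secs)
        return;                 /* statement hasn't run long enough yet */

    snprintf(path, sizeof(path), "pg_long_queries/%d", (int) getpid());

    f = fopen(path, "w");
    if (f != NULL)
    {
        fputs(query_text, f);   /* full text, not truncated at 1K */
        fclose(f);
    }
}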
Also, I don't necessarily buy that 32k * max_connections is too much
shared memory; even with max_connections of 1000 that's only 32M,
which is trivial for any box that's actually configured for 1000
connections.
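Back-of-envelope, in case the units aren't obvious: 32 KB per backend *
1000 backends = 32,000 KB, i.e. roughly 32 MB of shared memory.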
--
Decibel!, aka Jim C. Nasby, Database Architect decibel(at)decibel(dot)org
Give your computer some brain candy! www.distributed.net Team #1828