From: "Simon Riggs" <simon(at)2ndquadrant(dot)com>
To: "Gregory Stark" <stark(at)enterprisedb(dot)com>
Cc: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "Dave Page" <dpage(at)postgresql(dot)org>, <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Configurable Additional Stats
Date: 2007-07-02 17:38:52
Message-ID: 1183397932.4146.13.camel@silverbirch.site
Lists: pgsql-hackers
On Mon, 2007-07-02 at 17:41 +0100, Gregory Stark wrote:
> "Simon Riggs" <simon(at)2ndquadrant(dot)com> writes:
>
> > 2) Charge-back accounting. Keep track by userid, user group, time of
> > access etc of all accesses to the system, so we can provide chargeback
> > facilities to users. You can put your charging rules into the plugin and
> > have it spit out appropriate chargeback log records, when/if required.
> > e.g. log a chargeback record every time a transaction touches > 100
> > blocks, to keep track of heavy queries but ignore OLTP workloads.
>
> Sure, but I think Tom's question is how do you get from the plugin to wherever
> you want this data to be? There's not much you can do with the data at that
> point. You would end up having to reconstruct the entire stats collector
> infrastructure to ship the data you want out via some communication channel
> and then aggregate it somewhere else.
I just want to LOG a few extra pieces of information in the simplest
possible way. <sigh/>
There are no more steps in that process than there are for using
log_min_duration_statement and a performance analysis tool.
Outside-the-dbms processing is already required to use PostgreSQL
effectively, so this can't be an argument against the logging of
additional stats. Logging to the dbms instead would mean changing table
definitions etc., which would ultimately not work as well.
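To make the comparison concrete: the existing slow-query workflow is
nothing more than one postgresql.conf setting plus external analysis of
the log files (the threshold value below is only an example):

```ini
# postgresql.conf: log any statement running longer than 1 second,
# then post-process the server log with an external analysis tool
log_min_duration_statement = 1000    # milliseconds
```

The proposed plugin asks for exactly the same amount of outside-the-dbms
machinery: a setting to turn it on, and a tool to read what it logs.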
> Perhaps your plugin entry point is most useful *alongside* my stats-domain
> idea. If you wanted to you could write a plugin which set the stats domain
> based on whatever criteria you want whether that's time-of-day, userid, load
> on the system, etc.
Your stats domain idea is great, but it doesn't solve my problem (1). I
don't just want this solved, I *need* it solved, since there's no other
way to get this done accurately with a large and complex application.
We could just go back to having
log_tables_in_transaction = on | off
which would produce output like this:
LOG: transaction-id: 3456 table list {32456, 37456, 85345, 19436}
I don't expect everybody to like that, but it's what I want, so I'm
proposing it in a way that is more acceptable. If somebody has a better
way of doing this, please say so. The plugin looks pretty darn simple to
me... and hurts nobody.
--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com