From: | Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com> |
---|---|
To: | Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: New SQL counter statistics view (pg_stat_sql) |
Date: | 2016-10-19 00:36:11 |
Message-ID: | CAJrrPGekJcsUjW5x09UsTQGwqvdwcfimha-femuqMUOtz1NzXA@mail.gmail.com |
Lists: | pgsql-hackers |
On Wed, Oct 19, 2016 at 5:11 AM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> On Thu, Sep 29, 2016 at 1:45 AM, Haribabu Kommi
> <kommi(dot)haribabu(at)gmail(dot)com> wrote:
> > Currently, the SQL stats is a fixed-size set of counters that tracks all
> > the ALTER cases as a single counter. So the cost of sending the stats from
> > the backend to the stats collector at the end of the transaction is
> > constant, because of its fixed size. The overhead this approach adds to
> > sending and reading the stats is minimal.
> >
> > With the following approach, I feel it is possible to support counters at
> > the command tag level.
> >
> > Add a global and a local hash to keep track of the counters, using the
> > command tag as the key; this hash table grows dynamically whenever a new
> > type of SQL command gets executed. The local hash data is passed to the
> > stats collector whenever the transaction gets committed.
> >
> > The problem I am thinking of is that sending data from the hash and
> > populating the hash from the stats file for all the command tags adds
> > some overhead.
>
> Yeah, I'm not very excited about that overhead. This seems useful as
> far as it goes, but I don't really want to incur measurable overhead
> when it's in use. Having a hash table rather than a fixed array of
> slots means that you have to pass this through the stats collector
> rather than updating shared memory directly, which is fairly heavy
> weight. If each backend could have its own copy of the slot array and
> just update that, and readers added up the values across the whole
> array, this could be done without any locking at all, and it would
> generally be much lighter-weight than this approach.
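
The lock-free scheme Robert describes could look roughly like this (a minimal standalone sketch; MAX_BACKENDS, NUM_TAGS, and the function names are illustrative, not the actual patch):

```c
#include <stddef.h>

/* Hypothetical sizes; real values would come from PostgreSQL config. */
#define MAX_BACKENDS 8
#define NUM_TAGS     4          /* e.g. SELECT, INSERT, UPDATE, DELETE */

/*
 * One fixed slot array per backend, e.g. in shared memory.  Each backend
 * writes only to its own row, so updates need no locking; a reader sums
 * a column across all rows to get the cluster-wide total.
 */
static unsigned long sql_counts[MAX_BACKENDS][NUM_TAGS];

/* Called by backend 'backend_id' after executing a command. */
void
count_command(int backend_id, int tag_idx)
{
    sql_counts[backend_id][tag_idx]++;
}

/* Called by a reader (e.g. the pg_stat_sql view) to get a total. */
unsigned long
total_for_tag(int tag_idx)
{
    unsigned long total = 0;

    for (int i = 0; i < MAX_BACKENDS; i++)
        total += sql_counts[i][tag_idx];
    return total;
}
```

Since each row has a single writer and readers only ever add up values, the worst a reader can see is a slightly stale total, which is acceptable for statistics.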
With limited information, like combining all ALTER XXX commands into a single
Alter counter, the fixed array logic works fine without any problems. But
people are suggesting providing more details, like ALTER VIEW and so on, so I
checked the following approaches:
1. Using the nodetag as an index into an array to update the counter. But
this approach doesn't work for some cases, like T_DropStmt, where the tag
varies based on the removeType of the DropStmt. So I decided to drop this
approach.
2. Using the tag name to search a fixed-size array that is sorted with all
the command tags that are possible in PostgreSQL. Using a binary search,
find the location and update the counter. In this approach, the array first
needs to be filled with the tag information in sorted order.
3. Using a hash table to store the counter information, with the tag name
as the key.
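
Approach 2 could be sketched like this (a standalone illustration; the tag
list here is a tiny made-up subset, not the real set of PostgreSQL command
tags):

```c
#include <stdlib.h>
#include <string.h>

/*
 * A small, illustrative subset of command tags, kept in sorted order so
 * bsearch() can locate them; the real array would contain every tag
 * PostgreSQL can report.
 */
static const char *command_tags[] = {
    "ALTER VIEW",
    "CREATE TABLE",
    "DROP TABLE",
    "INSERT",
    "SELECT",
};
#define N_TAGS (sizeof(command_tags) / sizeof(command_tags[0]))

static unsigned long tag_counts[N_TAGS];

static int
tag_cmp(const void *key, const void *elem)
{
    return strcmp((const char *) key, *(const char *const *) elem);
}

/* Returns the slot index for a tag, or -1 if it is unknown. */
int
tag_index(const char *tag)
{
    const char **found = bsearch(tag, command_tags, N_TAGS,
                                 sizeof(command_tags[0]), tag_cmp);

    return found ? (int) (found - command_tags) : -1;
}

void
count_tag(const char *tag)
{
    int idx = tag_index(tag);

    if (idx >= 0)
        tag_counts[idx]++;
}
```

The lookup is O(log n) per command, but the array stays fixed-size, so the
stats message format and the shared-memory layout stay simple.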
I chose approach 3 as it can scale to any additional commands.
Are there any better alternatives that can point directly to the array index?
If we go with combining ALTER XXX into a single Alter counter, then I will
change the code to the fixed array approach.
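
For approach 3, the shape would be roughly the following (a minimal
standalone sketch with a hand-rolled chained hash; the actual patch would use
PostgreSQL's dynahash via hash_create/hash_search rather than this):

```c
#include <stdlib.h>
#include <string.h>

#define NBUCKETS 64

typedef struct TagEntry
{
    char             tag[64];   /* command tag, e.g. "ALTER VIEW" */
    unsigned long    count;
    struct TagEntry *next;
} TagEntry;

static TagEntry *buckets[NBUCKETS];

static unsigned
tag_hash(const char *tag)
{
    unsigned    h = 5381;

    while (*tag)
        h = h * 33 + (unsigned char) *tag++;
    return h % NBUCKETS;
}

/*
 * Find the entry for a tag, creating it on first use; this is how the
 * table grows dynamically as new command types are executed.
 */
TagEntry *
lookup_tag(const char *tag)
{
    unsigned    h = tag_hash(tag);
    TagEntry   *e;

    for (e = buckets[h]; e != NULL; e = e->next)
        if (strcmp(e->tag, tag) == 0)
            return e;

    e = calloc(1, sizeof(TagEntry));    /* error handling omitted */
    strncpy(e->tag, tag, sizeof(e->tag) - 1);
    e->next = buckets[h];
    buckets[h] = e;
    return e;
}
```

This trades the O(1) array update for a lookup plus allocation on first use,
which is where the send/read overhead Robert is worried about comes from.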
Regards,
Hari Babu
Fujitsu Australia