From: Michael Fuhr <mike(at)fuhr(dot)org>
To: hubert lubaczewski <hubert(dot)lubaczewski(at)eo(dot)pl>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: profiling postgresql queries?
Date: 2005-04-12 14:43:59
Message-ID: 20050412144359.GA88387@winnie.fuhr.org
Lists: pgsql-performance
On Tue, Apr 12, 2005 at 12:46:43PM +0200, hubert lubaczewski wrote:
>
> the problem is that both the inserts and updates operate on
> heavily triggered tables.
> and it made me wonder - is there a way to tell how much of the backend's
> time was spent on triggers, index updates and so on?
> like:
> total query time: 1 second
> trigger a: 0.50 second
> trigger b: 0.25 second
> index update: 0.1 second
EXPLAIN ANALYZE in 8.1devel (CVS HEAD) prints a few statistics for
triggers:
EXPLAIN ANALYZE UPDATE foo SET x = 10 WHERE x = 20;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------
Index Scan using foo_x_idx on foo (cost=0.00..14.44 rows=10 width=22) (actual time=0.184..0.551 rows=7 loops=1)
Index Cond: (x = 20)
Trigger row_trig1: time=1.625 calls=7
Trigger row_trig2: time=1.346 calls=7
Trigger stmt_trig1: time=1.436 calls=1
Total runtime: 9.659 ms
(6 rows)
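If you want to try it yourself, something along these lines should
reproduce that kind of output on a test server.  The table, index, and
trigger names here are just placeholders for illustration; they only
happen to mirror the plan above:

-- assumes plpgsql is installed in the database (createlang plpgsql yourdb)
CREATE TABLE foo (id serial PRIMARY KEY, x integer);
CREATE INDEX foo_x_idx ON foo (x);

-- do-nothing row-level trigger; a real trigger would do actual work
CREATE FUNCTION row_trig1() RETURNS trigger AS $$
BEGIN
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER row_trig1 BEFORE UPDATE ON foo
    FOR EACH ROW EXECUTE PROCEDURE row_trig1();

-- some rows for the UPDATE to hit
INSERT INTO foo (x) SELECT 20 FROM generate_series(1, 7);

-- EXPLAIN ANALYZE then reports time= and calls= for each trigger fired
EXPLAIN ANALYZE UPDATE foo SET x = 10 WHERE x = 20;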
8.1devel changes frequently (sometimes requiring initdb) and isn't
suitable for production, but if the trigger statistics would be
helpful, you could set up a test server and load a copy of your
database into it. Just beware that because it's bleeding edge, it
might destroy your data and it might behave differently from released
versions.
--
Michael Fuhr
http://www.fuhr.org/~mfuhr/