Re: How to investigate what postgres is spending time on

From: Chris Travers <chris(dot)travers(at)gmail(dot)com>
To: krc(at)hiper(dot)dk
Cc: Postgres General <pgsql-general(at)postgresql(dot)org>
Subject: Re: How to investigate what postgres is spending time on
Date: 2018-09-23 13:26:15
Message-ID: CAKt_ZfvFodk2LTYPKYH1Rr+gv_zqhSGxGU1YY0mSp8c9Ap2F_w@mail.gmail.com
Lists: pgsql-general

On Sun, Sep 23, 2018 at 1:15 PM Kim Rose Carlsen <krc(at)hiper(dot)dk> wrote:

> Hi
>
>
> I have some simple INSERT / UPDATE queries that take a long time the
> first time they are run in our test environment, but I'm not sure what
> postgres is doing or what I can do to help it. What they have in common
> is that the table contains many rows, on the order of about 20 million.
>
>
> Query:
>
> INSERT INTO communication.request_parameter (request_id,
> template_version_parameter_id, parameter_value)
> VALUES (1222, 1211, 122) RETURNING request_parameter_id
>
> Row from pg_stat_statements:
> -[ RECORD 1 ]-------+---------------------------------------------------------------
> userid              | 16385
> dbid                | 16389
> queryid             | 2064198912
> query               | INSERT INTO communication.request_parameter (request_id, tem
> calls               | 98
> total_time          | 646.393451
> min_time            | 0.036666
> max_time            | 638.712758
> mean_time           | 6.59585154081633
> stddev_time         | 64.1818799227704
> rows                | 98
> shared_blks_hit     | 2850
> shared_blks_read    | 24
> shared_blks_dirtied | 21
> shared_blks_written | 0
> local_blks_hit      | 0
> local_blks_read     | 0
> local_blks_dirtied  | 0
> local_blks_written  | 0
> temp_blks_read      | 0
> temp_blks_written   | 0
> blk_read_time       | 0
> blk_write_time      | 0
>
> Description of table:
> # \d communication.request_parameter
>                               Table "communication.request_parameter"
>             Column             |       Type        | Collation | Nullable |                                    Default
> --------------------------------+-------------------+-----------+----------+--------------------------------------------------------------------------------
>  request_parameter_id          | integer           |           | not null | nextval('communication.request_parameter_request_parameter_id_seq'::regclass)
>  request_id                    | integer           |           | not null |
>  template_version_parameter_id | integer           |           | not null |
>  parameter_value               | character varying |           |          |
> Indexes:
>     "request_parameter_pkey" PRIMARY KEY, btree (request_parameter_id)
>     "request_parameter_parameter_value_idx" btree (parameter_value)
>     "request_parameter_request_id_idx" btree (request_id)
>     "request_parameter_template_version_parameter_id_idx" btree (template_version_parameter_id)
> Foreign-key constraints:
>     "request_parameter_request_id_fkey" FOREIGN KEY (request_id) REFERENCES communication.request(request_id)
>     "request_parameter_template_version_parameter_id_fkey" FOREIGN KEY (template_version_parameter_id) REFERENCES communication.template_version_parameter(template_version_parameter_id)
>
> This only happens in testing, and on a cold-booted database. The test
> database is constructed with pg_dump and restore on a fresh postgres
> installation.
>

Sounds like warming up the cache, but still, in a test environment you may
want to add auto_explain to your list of preloaded libraries and perhaps
set it to dump an EXPLAIN ANALYZE plan whenever a statement exceeds a
certain threshold (a configuration sketch follows, and a manual EXPLAIN
example after the quoted message). Note that while dumping the query plans
has very little overhead, timing the individual plan nodes does negatively
impact performance.
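
A minimal sketch of what that could look like in postgresql.conf; the
250ms threshold is only an illustrative value, not something taken from
your workload:

    # Load auto_explain into every new backend. shared_preload_libraries
    # works as well, but requires a server restart.
    session_preload_libraries = 'auto_explain'

    # Log the plan of any statement running longer than this threshold.
    auto_explain.log_min_duration = '250ms'

    # Include actual run times and buffer usage, EXPLAIN ANALYZE style.
    auto_explain.log_analyze = on
    auto_explain.log_buffers = on

    # Per-node timing is where most of the overhead comes from; set it
    # to off if log_analyze alone slows the test runs down too much.
    auto_explain.log_timing = on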

>
>
> Best Regards
> Kim Carlsen
>
>
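
To confirm the cold-cache theory for a single statement by hand, something
along these lines works too (the INSERT is the one from your message; the
transaction wrapper is only there so the test row gets rolled back):

    BEGIN;
    EXPLAIN (ANALYZE, BUFFERS)
    INSERT INTO communication.request_parameter
           (request_id, template_version_parameter_id, parameter_value)
    VALUES (1222, 1211, 122)
    RETURNING request_parameter_id;
    ROLLBACK;

On a freshly restored, cold-booted instance the Buffers lines should show
mostly "shared read" blocks the first time and mostly "shared hit" blocks
on repeat runs, which is what cache warm-up looks like.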

--
Best Wishes,
Chris Travers

Efficito: Hosted Accounting and ERP. Robust and Flexible. No vendor
lock-in.
http://www.efficito.com/learn_more
