From: "David G. Johnston" <david.g.johnston@gmail.com>
To: Shubham Mittal <mittalshubham30@gmail.com>
Cc: PostgreSQL General <pgsql-general@lists.postgresql.org>
Subject: Re: Query takes around 15 to 20 min over 20 Lakh rows
Date: 2021-09-02 23:08:52
Message-ID: CAKFQuwZecBri-p0Zh-Fn8-1T6FNPH94wPYXOZ3QQaON2pk1=6w@mail.gmail.com
Lists: pgsql-general
On Thu, Sep 2, 2021 at 3:16 PM Shubham Mittal <mittalshubham30@gmail.com> wrote:
> Hi,
>
> Please help in optimizing this query. I need to generate reports
> daily using it, and it takes almost 15 to 20 minutes to execute
> due to the joins.
> Here common_details is a jsonb column.
>
> SELECT T.order_id,
>        ((T.common_details -> 'commonDetails'::text) -> 'bundle'::text) ->> 'srType'::text       AS product,
>        ((T.common_details -> 'commonDetails'::text) -> 'bundle'::text) ->> 'mobileNumber'::text AS msisdn,
>
Use jsonb_populate_recordset (or one of its siblings) to get rid of as many
of these key-based value extraction operations as possible and build a
proper table from the contents of the jsonb column. Possibly materialize it
into a temporary table to which you then add indexes before running the
report joins.
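As a minimal sketch of that idea, using the sibling function jsonb_to_record
in a LATERAL join to extract each key exactly once (the source table name
"orders" and the index column are placeholders, since the original query is
truncated and does not show the FROM clause):

```sql
-- Extract the 'bundle' keys once per row instead of repeating
-- (common_details -> 'commonDetails' -> 'bundle') ->> '...' per column,
-- and materialize the result for the report joins.
CREATE TEMP TABLE report_src AS
SELECT T.order_id,
       b."srType"       AS product,
       b."mobileNumber" AS msisdn
FROM   orders AS T  -- placeholder table name; not shown in the original query
CROSS JOIN LATERAL jsonb_to_record(
         T.common_details -> 'commonDetails' -> 'bundle'
       ) AS b("srType" text, "mobileNumber" text);

-- Index whatever columns the subsequent report joins filter on.
CREATE INDEX ON report_src (msisdn);
```

The column definition list after AS must name the JSON keys exactly (hence
the quoted mixed-case identifiers); keys missing from a given row simply
come back NULL.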
David J.