Re: How should we design our tables and indexes

From: veem v <veema0000(at)gmail(dot)com>
To: Greg Sabino Mullane <htamfids(at)gmail(dot)com>
Cc: pgsql-general <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: How should we design our tables and indexes
Date: 2024-02-12 19:04:48
Message-ID: CAB+=1TWM2Pwg3ReD7qsGQNpPGyNwo+4m1SS_eMbt63ai=qbMOg@mail.gmail.com
Lists: pgsql-general

Thank You.

On Mon, 12 Feb 2024 at 22:17, Greg Sabino Mullane <htamfids(at)gmail(dot)com>
wrote:

>> Sure, will try to test and see how it behaves when the number of
>> simultaneous queries (here 32/4 = 8 concurrent queries) exceeds the
>> max_parallel_workers limit. Though I am expecting the further queries
>> exceeding the limit might get serialized.
>>
>
> Yes - if there are not enough workers available, it will run with a
> reduced number of workers, including possibly zero. You can see that when
> you run an explain analyze, it will show you the number of workers it wants
> and the number it was actually able to get.
>
>
Thank you. Got the point.
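
Just to confirm my understanding: with something like the below (big_table
is only a placeholder name on my side), the plan output should show both
numbers, and the launched count can drop below the planned count (even to
zero) when the worker pool is exhausted?

  SET max_parallel_workers_per_gather = 4;
  EXPLAIN (ANALYZE) SELECT count(*) FROM big_table;
  -- relevant lines in the output would look like:
  --   Workers Planned: 4
  --   Workers Launched: 2   <- fewer than planned when no free workers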

Say these quick queries ran over a certain time frame (a duration of ~1 hr)
and a few of the executions ran longer. That could be because fewer parallel
workers were available (the max limit having been exhausted by concurrent
executions), or it could be because the execution path changed for certain
executions of the query.

Is there any way to track those historical executions and confidently find
the exact root cause of the slow ones?
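
For example, would enabling pg_stat_statements together with auto_explain be
the usual way to capture that history? A rough sketch of what I have in mind
(the threshold values below are just my assumptions, for illustration):

  -- postgresql.conf (assumed values):
  -- shared_preload_libraries = 'pg_stat_statements,auto_explain'
  -- auto_explain.log_min_duration = '5s'  -- log plans of statements slower than 5s
  -- auto_explain.log_analyze = on         -- include actual timings in the logged plan

  -- afterwards, per normalized query text:
  SELECT query, calls, mean_exec_time, max_exec_time
  FROM pg_stat_statements
  ORDER BY max_exec_time DESC
  LIMIT 10;

If I understand correctly, that would show which statements occasionally ran
much longer than their mean, and the auto_explain log entries would show
whether the plan (or the number of launched workers) differed for those runs.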

Regards
Veem
