How should I specify work_mem/max_worker_processes if I want to do big queries now and then?

From: James(王旭) <wangxu(at)gu360(dot)com>
To: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: How should I specify work_mem/max_worker_processes if I want to do big queries now and then?
Date: 2019-11-20 07:56:05
Message-ID: tencent_3E7AB8853A9009844041E40C@qq.com
Lists: pgsql-general

Hello,

I am doing a query that fetches about 10,000,000 records in one go, but it is very slow, like "mission impossible".
I am confident that these records should fit into my shared_buffers setting (20 GB), and my query runs entirely on my index, which is about 19 MB x 100 partitions, so the index can also fit into shared_buffers easily. (Actually I even created a new, smaller partial index and dropped the bigger old one.)

This situation is very disappointing. How can I make my queries much faster if my data grows beyond 10,000,000 rows in one partition? I am using PostgreSQL 11.6.
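For reference, per-query overrides like the ones the subject line asks about can be set at the session level rather than in postgresql.conf. This is a minimal sketch; the values are illustrative placeholders, not tuned recommendations for this workload:

```sql
-- Raise work_mem only for this session, so sorts and hashes for the big
-- query can run in memory. Note work_mem is a per-operation allocation,
-- so a complex parallel plan may use several multiples of it.
SET work_mem = '256MB';

-- Allow more parallel workers for this query. The effective number is
-- still capped by the server-wide max_worker_processes and
-- max_parallel_workers settings, which require a restart/reload to change.
SET max_parallel_workers_per_gather = 4;

-- Run the big query here; EXPLAIN (ANALYZE, BUFFERS) shows whether the
-- plan actually uses the index and the parallel workers.

-- Restore the session defaults afterwards.
RESET work_mem;
RESET max_parallel_workers_per_gather;
```

SET LOCAL inside a transaction achieves the same scoped effect if the query runs in an explicit transaction block.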

Many thanks,
James
