From: Ron <ronljohnsonjr(at)gmail(dot)com>
To: pgsql-admin(at)lists(dot)postgresql(dot)org
Subject: Re: need suggestion on querying big tables
Date: 2022-12-05 14:45:08
Message-ID: 8c748c20-84bd-ed7b-c10f-35d640c6994b@gmail.com
Lists: pgsql-admin
On 12/5/22 01:35, Holger Jakobs wrote:
> Am 05.12.22 um 07:29 schrieb Ebin Jozer:
>> Hi Team,
>> In PostgreSQL 11 we have two tables, 435 GB and 347 GB in size.
>> If we query a single table, or do an inner join between the two big
>> tables, it is not displaying any output; it just keeps running.
>>
>> We can see that the wait event type is IO and the wait event is DataFileRead.
>>
>> Server Spec : 8 cores and 64GB RAM
>> PG config: effective_cache_size = 53 GB, shared_buffers = 12 GB
>>
>> Can you please suggest some ideas on how we can query these big
>> tables and speed up getting the output?
>>
>> Thanks & Regards,
>> /Ebin/
>
> There are basically two "secrets" to performance in this case:
>
> 1. make sure that the join operation uses indexes
> 2. make sure that you have enough work_mem available
>
> work_mem can be set in postgresql.conf, but also individually in every
> session. So don't set it too high globally, because that much memory can
> be allocated for every sort and hash operation.
>
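The two points above can be sketched roughly as follows. The table names big_a and big_b and the join column id are hypothetical placeholders for the poster's actual schema, and '256MB' is an arbitrary example value:

```sql
-- Index the join columns (if not already indexed) so the planner
-- has an alternative to full sequential scans of the 400 GB tables.
CREATE INDEX CONCURRENTLY IF NOT EXISTS big_a_id_idx ON big_a (id);
CREATE INDEX CONCURRENTLY IF NOT EXISTS big_b_id_idx ON big_b (id);

-- Raise work_mem for this session only; sorts and hashes spill
-- to disk less often, and other sessions are unaffected.
SET work_mem = '256MB';

-- Inspect the plan: look for index scans, and for sort/hash nodes
-- reporting "Disk" usage, which means work_mem is still too small.
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM big_a a
JOIN big_b b ON b.id = a.id;
```

Inside a transaction, SET LOCAL work_mem = '256MB' limits the change to that transaction only.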
A "just smart enough" user could perform an inadvertent Denial of Service
attack on the database by cranking his session work_mem really high, no?
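As a partial guard (a sketch; the role name report_user is a hypothetical placeholder), an admin can lower the default per role. Note this only changes the role's default: work_mem is a user-settable parameter, so the session can still override it, and a hard cap would need OS-level limits or pooler policy.

```sql
-- Give an untrusted role a lower default per-operation memory budget;
-- this overrides the server-wide default whenever the role connects,
-- but the session may still SET work_mem higher itself.
ALTER ROLE report_user SET work_mem = '64MB';

-- Also bound how many concurrent backends the role can open.
ALTER ROLE report_user CONNECTION LIMIT 5;
```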
--
Angular momentum makes the world go 'round.
Next Message: John Wiencek | 2022-12-05 14:50:17 | Re: need suggestion on querying big tables
Previous Message: Paul Smith | 2022-12-05 12:18:12 | Re: need suggestion on querying big tables