Re: Suggestion for memory parameters

From: yudhi s <learnerdatabase99(at)gmail(dot)com>
To: veem v <veema0000(at)gmail(dot)com>
Cc: pgsql-general <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: Suggestion for memory parameters
Date: 2024-09-27 06:36:58
Message-ID: CAEzWdqfHyAa7=WNyJLiRCXGzJvEjkFP1MHG2ontwCcbx7TdUBQ@mail.gmail.com
Lists: pgsql-general

On Fri, Sep 27, 2024 at 9:11 AM veem v <veema0000(at)gmail(dot)com> wrote:

>
> On Thu, 26 Sept 2024 at 16:33, yudhi s <learnerdatabase99(at)gmail(dot)com>
> wrote:
>
>> Hello All,
>>
>> In an RDS Postgres database we are seeing SELECT queries that sort
>> ~50 million rows (they carry an ORDER BY clause); the significant
>> portion of the wait event shows as "IO:BufFileWrite", and the queries
>> run for 20+ minutes.
>>
>> Going through the document linked below, it states we should monitor
>> the "FreeLocalStorage" metric. When monitoring it, I see a maximum of
>> ~535 GB, and while these queries run it drops to ~100 GB. Note: this
>> is an r7g.8xlarge instance.
>>
>>
>> https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/apg-waits.iobuffile.html
>>
>> We were thinking of bumping work_mem up to a higher value at the
>> database level; it is currently at the 4 MB default. But we will also
>> have ~100 sessions running at the same time, the majority from other
>> applications that execute single-row INSERT statements, which I expect
>> do not need a high work_mem. Setting it at the database level could
>> consume up to 100 times the configured work_mem value. So how should we
>> handle this situation?
>> Or
>> Is it fine to let the queries consume "FreeLocalStorage" as long as it
>> does not go down to zero?
>>
>> I am also confused about the local storage (showing as 535 GB) versus
>> the memory/RAM, which is 256 GB for this instance class with a ~128 TB
>> maximum storage limit. How are these two kinds of storage different,
>> mainly the 535 GB it shows versus the 128 TB storage limit?
>> Appreciate your guidance.
>>
>> The select query looks something like the below, with no joins, just a
>> single-table fetch:
>>
>> SELECT ....
>> FROM <table_name>
>> WHERE ...
>> ORDER BY column1, column2 LIMIT $b1 OFFSET $B2;
>>
>>
>>
> My 2 cents:
> I think you should set work_mem at the session level if the sorting
> queries come from only a handful of sessions, because setting it at the
> database level could eat up most of your RAM (which you said is 256 GB),
> given that 100+ sessions are spawned at any point in time.
>
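For what it's worth, that session-level suggestion can be sketched as below (the '2GB' value and the role name are illustrative assumptions, not figures from this thread):

```sql
-- Scope work_mem to just the sorting sessions, rather than database-wide.
BEGIN;
SET LOCAL work_mem = '2GB';  -- in effect only until COMMIT/ROLLBACK
-- ... run the big ORDER BY query here ...
COMMIT;

-- Or scope it to the role the reporting application connects as:
ALTER ROLE reporting_app SET work_mem = '2GB';
```

SET LOCAL keeps the override from leaking into other statements on a pooled connection, which matters when 100+ sessions share the instance.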

Thank you.
When I checked pg_stat_statements for this query and divided
(temp_blks_read + temp_blks_written) by "calls", it came to ~1 million
temp blocks per call, which at the 8 kB block size is roughly 7-8 GB. So
does that mean ~7-8 GB of work_mem should be allocated for this query?
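The back-of-the-envelope arithmetic above can be sketched as follows (assuming PostgreSQL's default 8 kB block size):

```python
# Rough work_mem sizing from pg_stat_statements temp-block counters.
BLOCK_SIZE = 8192  # bytes per temp block (PostgreSQL default block size)

def temp_spill_per_call(temp_blks_read: int, temp_blks_written: int, calls: int) -> float:
    """Average temp-file volume per execution, in GiB."""
    blocks_per_call = (temp_blks_read + temp_blks_written) / calls
    return blocks_per_call * BLOCK_SIZE / 2**30

# ~1 million temp blocks per call, as observed in the thread
# (the read/write split here is an assumption for illustration):
print(round(temp_spill_per_call(500_000, 500_000, 1), 2))  # ≈ 7.63 GiB
```

Note that work_mem is a limit per sort or hash operation, not per query, so a plan with several such nodes can use a multiple of the configured value.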
