Re: R: repeated out of shared memory error - not related to max_locks_per_transaction

From: MichaelDBA <MichaelDBA(at)sqlexec(dot)com>
To: Alfonso Moscato <alfonso(dot)moscato(at)merqurio(dot)it>
Cc: 'Fabio Pardi' <f(dot)pardi(at)portavita(dot)eu>, pgsql-admin(at)lists(dot)postgresql(dot)org
Subject: Re: R: repeated out of shared memory error - not related to max_locks_per_transaction
Date: 2018-07-20 14:37:31
Message-ID: 5B51F3AB.1050608@sqlexec.com
Lists: pgsql-admin

Coolio, work_mem is a red herring here. I think Fabio and I are on the
same page now after his last elaboration about work_mem.

Regards,
Michael Vitale
> Alfonso Moscato <mailto:alfonso(dot)moscato(at)merqurio(dot)it>
> Friday, July 20, 2018 10:23 AM
>
> Michael, Fabio,
>
> Moreover, I get the message “out of shared memory”, not “out of memory”.
>
> Anyway, I can confirm that when the errors began there were more than
> 10 GB of free memory.
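>
> (Since “out of shared memory” usually points at the lock table rather
> than at work_mem, a quick way to watch lock consumption while the
> errors occur, assuming access to pg_locks, is:
>
> SELECT count(*) FROM pg_locks;    -- current entries in the lock table
> SHOW max_locks_per_transaction;   -- sizing parameter for that table
>
> though in this case max_locks_per_transaction has already been ruled out.)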
>
> Regards
>
> Alfonso
>
> *From:* Fabio Pardi <f(dot)pardi(at)portavita(dot)eu>
> *Sent:* Friday, July 20, 2018 3:57 PM
> *To:* MichaelDBA <MichaelDBA(at)sqlexec(dot)com>
> *Cc:* pgsql-admin(at)lists(dot)postgresql(dot)org
> *Subject:* Re: repeated out of shared memory error - not related to
> max_locks_per_transaction
>
> Michael,
>
> I think we are talking about 2 different scenarios.
>
> 1) a single operation uses more than work_mem -> it gets spilled to
> disk, like a big sort. That's what I mentioned.
>
> 2) there are many, many concurrent operations, and one more of them
> wants to allocate work_mem, but the memory on the server is exhausted
> at that point -> in that case you will get 'out of memory'. That's
> what you are referring to.
>
> Given the description of the problem (RAM and Postgres settings) and
> the fact that Alfonso says that "there is a lot of free memory", I
> think it is unlikely that we are in the second situation described
> above.
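>
> A quick way to reproduce scenario 1 on a scratch database (table name
> and sizes here are made up for illustration):
>
> SET work_mem = '4MB';
> -- build a table comfortably larger than work_mem
> CREATE TABLE spill_test AS
>   SELECT g AS id, md5(g::text) AS payload
>   FROM generate_series(1, 5000000) g;
> -- sort it; the plan should report something like
> -- "Sort Method: external merge  Disk: ...kB", with no error raised
> EXPLAIN (ANALYZE, BUFFERS)
>   SELECT * FROM spill_test ORDER BY payload;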
>
>
> regards,
>
> fabio pardi
>
> On 20/07/18 15:28, MichaelDBA wrote:
>
>
> MichaelDBA <mailto:MichaelDBA(at)sqlexec(dot)com>
> Friday, July 20, 2018 9:28 AM
> Wrong again, Fabio. PostgreSQL is not coded to manage memory usage in
> the way you think it does with work_mem. Here is a quote from Citus
> about the dangers of setting work_mem too high:
>
> When you consume more memory than is available on your machine you can
> start to see "out of memory" errors within your Postgres logs, or in
> worse cases the OOM killer can start to randomly kill running
> processes to free up memory. An out of memory error in Postgres simply
> errors on the query you're running, whereas the OOM killer in Linux
> begins killing running processes, which in some cases might even
> include Postgres itself.
>
> When you see an "out of memory" error you either want to increase the
> overall RAM on the machine itself by upgrading to a larger instance,
> OR you want to decrease the amount of memory that work_mem uses. Yes,
> you read that right: when you run out of memory it is often better to
> decrease work_mem instead of increasing it, since that is the amount
> of memory that can be consumed by each process, and too many
> operations may be leveraging up to that much memory.
>
>
> https://www.citusdata.com/blog/2018/06/12/configuring-work-mem-on-postgres/
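>
> To put rough numbers on that (hypothetical figures, not Alfonso's
> actual settings): each sort or hash node in each backend may use up to
> work_mem, so with work_mem = 64MB, 300 connections, and about two such
> nodes per active query, the theoretical ceiling is
>
>     300 connections * 2 nodes * 64MB = 37.5 GB
>
> which can easily blow past the RAM on the machine even though each
> individual query stays within its limit.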
>
> Regards,
> Michael Vitale
>
> Fabio Pardi <mailto:f(dot)pardi(at)portavita(dot)eu>
> Friday, July 20, 2018 9:19 AM
>
> Nope Michael,
>
> if 'stuff' gets spilled to disk, it does not end up in an error. It
> will silently write a file to disk for the time being and then delete
> it when your operation is finished.
>
> period.
>
> Depending on your log settings, it might appear in the logs as a
> 'temporary file: ...' message.
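>
> For example (assuming superuser rights, since log_temp_files is a
> superuser setting):
>
> SET log_temp_files = 0;  -- log every temp file; the value is a size
>                          -- threshold in kB, 0 = log them all
>
> after which a spilled sort shows up in the log roughly as:
>
> LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp12345.0", size 104857600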
>
>
> regards,
>
> fabio pardi
>
>
>
> On 20/07/18 15:00, MichaelDBA wrote:
>
> MichaelDBA <mailto:MichaelDBA(at)sqlexec(dot)com>
> Friday, July 20, 2018 9:00 AM
> I do not think that is true. Stuff just gets spilled to disk when the
> work_mem buffers would exceed the work_mem constraint. They are not
> constrained by how much real memory is available, hence the memory
> error! They will try to allocate memory even if it is not available,
> as long as the work_mem threshold has not been reached.
>
> Regards,
> Michael Vitale
>
>
>
