From: Jeremy Finzel <finzelj(at)gmail(dot)com>
To: Andrew Gierth <andrew(at)tao11(dot)riddles(dot)org(dot)uk>
Cc: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, PostgreSQL General <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: Size estimation of postgres core files
Date: 2019-02-15 19:01:50
Message-ID: CAMa1XUgd79VBYFHs=uTQB+YkXQjw-YxEQtB=USwL4BDPAVJD4g@mail.gmail.com
Lists: pgsql-general
>
> It doesn't write out all of RAM, only the amount in use by the
> particular backend that crashed (plus all the shared segments attached
> by that backend, including the main shared_buffers, unless you disable
> that as previously mentioned).
>
> And yes, it can take a long time to generate a large core file.
>
> --
> Andrew (irc:RhodiumToad)
>
Based on Alvaro's response, I thought it was reasonably possible that the dump
*could* include nearly all of RAM, since that was my original question. If
shared_buffers is, say, 50G and the OS has 1T, shared_buffers is only a small
portion of that. But my real question is what we should reasonably assume is
possible, meaning: how much space should I provision for a volume to be able
to hold the core dump in case of a crash? The time it takes to write the core
file would definitely be a concern if it could indeed be that large.
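To put rough numbers on what I am trying to provision for, here is the
back-of-the-envelope arithmetic as I understand it, using the 50G
shared_buffers / 1T host figures above; the per-backend private-memory number
is purely an assumption for illustration:

    # Rough sizing sketch; the backend_private_gb figure is an assumption,
    # not something measured on our systems.
    shared_buffers_gb = 50      # attached shared segment, dumped by default
    backend_private_gb = 4      # work_mem, maintenance_work_mem, etc. (assumed)

    core_with_shared = shared_buffers_gb + backend_private_gb
    core_without_shared = backend_private_gb  # if shared mappings are filtered out

    print(f"core including shared memory:  ~{core_with_shared} GB")
    print(f"core excluding shared memory:  ~{core_without_shared} GB")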
Could someone provide more information on exactly how to set that
coredump_filter?
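For what it's worth, my current understanding is that this is the Linux
/proc/<pid>/coredump_filter bitmask described in proc(5). The sketch below is
only my guess at how one would apply it (the PID is a placeholder and the
choice of bits is my assumption), so corrections are welcome:

    # Sketch: adjust coredump_filter for a running postmaster so that shared
    # memory mappings (which include shared_buffers) are left out of any core
    # dump. Bits 1 and 3 are the anonymous-shared and file-backed-shared
    # mapping flags documented in proc(5).

    POSTMASTER_PID = 12345  # placeholder: read the real value from postmaster.pid

    path = f"/proc/{POSTMASTER_PID}/coredump_filter"

    with open(path) as f:
        current = int(f.read(), 16)     # kernel reports the mask in hex
    print(f"current filter: {current:#04x}")

    # Clear bit 1 (anonymous shared mappings) and bit 3 (file-backed shared
    # mappings) so attached shared segments are skipped; private backend
    # memory (bit 0) is still dumped.
    new_mask = current & ~((1 << 1) | (1 << 3))

    with open(path, "w") as f:
        f.write(f"{new_mask:#x}\n")     # kernel accepts a hex value here
    print(f"new filter: {new_mask:#04x}")

My understanding is that the filter is inherited across fork, so setting it on
the postmaster before backends are spawned should cover newly forked backends;
is that correct?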
We are looking to enable core dumps to aid debugging in case of unexpected
crashes, and are wondering whether there are any general recommendations for
balancing the costs and benefits of doing so.
Thank you!
Jeremy