| From: | Peter Geoghegan <pg(at)heroku(dot)com> |
|---|---|
| To: | Andrew Dunstan <andrew(at)dunslane(dot)net> |
| Cc: | PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org> |
| Subject: | Re: memory explosion on planning complex query |
| Date: | 2014-11-26 22:26:01 |
| Message-ID: | CAM3SWZQ9BcTc6n4tAY5Vdx+wS5Mo7DweLXW4y4CQWaRkNtESuw@mail.gmail.com |
| Lists: | pgsql-hackers |
On Wed, Nov 26, 2014 at 2:00 PM, Andrew Dunstan <andrew(at)dunslane(dot)net> wrote:
> The client's question is whether this is a bug. It certainly seems like
> it should be possible to plan a query without chewing up this much memory,
> or at least to be able to limit the amount of memory that can be grabbed
> during planning. Going from humming along happily to OOM conditions just
> from running "explain &lt;somequery&gt;" is not very friendly.
Have you tried this with a "#define SHOW_MEMORY_STATS" build, or
otherwise rigged Postgres to call MemoryContextStats() at interesting
times?
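
(For reference, one way to do the latter without a rebuild is to attach a debugger to the backend and call MemoryContextStats() directly; it prints a per-context usage breakdown to the server's stderr. A sketch, assuming the backend PID has been looked up, e.g. via pg_stat_activity:

```shell
# Hypothetical session: 12345 stands in for the PID of the backend
# that is running the problematic EXPLAIN. MemoryContextStats() walks
# the context tree and reports allocations for each context.
gdb -p 12345 -batch -ex 'call MemoryContextStats(TopMemoryContext)'
```

The output lands in the backend's stderr, i.e. the server log, not the gdb session.)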
--
Peter Geoghegan