From: Andrew Dunstan <andrew(at)dunslane(dot)net>
To: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: memory explosion on planning complex query
Date: 2014-11-27 00:24:02
Message-ID: 54766F22.8050409@dunslane.net
Lists: pgsql-hackers
On 11/26/2014 05:00 PM, Andrew Dunstan wrote:
>
> Attached is some anonymized DDL for a fairly complex schema from a
> PostgreSQL Experts client. Also attached is an EXPLAIN query that runs
> against the schema. The client's problem is that in trying to run the
> EXPLAIN, Postgres simply runs out of memory. On my untuned 9.3 test
> rig (Scientific Linux 6.4 with 24GB of RAM and 24GB of swap), vmstat
> clearly shows the EXPLAIN chewing up about 7GB of memory. When it's
> done, free memory jumps back to where it was. In a similar case on
> the client's test rig we saw memory use jump far more.
>
> The client's question is whether this is a bug. It certainly seems
> like it should be possible to plan a query without chewing up this
> much memory, or at least to be able to limit the amount of memory
> that can be grabbed during planning. Going from humming along happily
> to OOM conditions simply by running "EXPLAIN <somequery>" is not very
> friendly.
>
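For anyone wanting to reproduce the observation quoted above, a minimal
sketch (some_complex_view is a hypothetical stand-in for the client's
anonymized query): run "vmstat 1" in a second terminal and watch the
free/swpd columns while psql executes:

    -- Plain EXPLAIN parses and plans the query but does not execute it,
    -- so any memory growth vmstat shows while this runs is attributable
    -- to planning alone.
    EXPLAIN SELECT * FROM some_complex_view;
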
A further data point, thanks to Andrew Gierth (a.k.a. RhodiumToad) for
pointing it out: the query itself grabs about 600MB to 700MB to run,
whereas the EXPLAIN takes vastly more - on my system, ten times more.
Surely that's not supposed to happen?
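
One way to put rough numbers on that gap (a sketch only; I haven't run
this against the client's schema): the log_planner_stats and
log_executor_stats settings write getrusage() figures for each phase to
the server log, so planner and executor resource use can be compared
separately. Exactly which fields appear depends on server version and
platform; some_complex_view is again a hypothetical stand-in.

    SET log_planner_stats = on;   -- log resource usage per planner run
    SET log_executor_stats = on;  -- likewise for the executor

    EXPLAIN SELECT * FROM some_complex_view;  -- plans only: PLANNER STATISTICS logged
    SELECT * FROM some_complex_view;          -- plans and runs: EXECUTOR STATISTICS too

Watching the backend's RSS in top around each statement is cruder but
works on any version.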
cheers
andrew