From: Antonin Houska <ah(at)cybertec(dot)at>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: memory explosion on planning complex query
Date: 2014-11-26 22:41:24
Message-ID: 54765714.3030205@cybertec.at
Lists: pgsql-hackers
On 11/26/2014 11:00 PM, Andrew Dunstan wrote:
>
> Attached is some anonymized DDL for a fairly complex schema from a
> PostgreSQL Experts client. Also attached is an explain query that runs
> against the schema. The client's problem is that in trying to run the
> explain, Postgres simply runs out of memory. On my untuned 9.3 test rig,
> (Scientific Linux 6.4 with 24GB of RAM and 24GB of swap) vmstat clearly
> shows the explain chewing up about 7GB of memory. When it's done, the
> free memory jumps back to where it was. In a similar case on the client's
> test rig we saw memory use jump far more.
>
> The client's question is whether this is not a bug. It certainly seems
> like it should be possible to plan a query without chewing up this much
> memory, or at least to be able to limit the amount of memory that can be
> grabbed during planning. Going from humming along happily to OOM
> conditions all through running "explain <somequery>" is not very friendly.
It's not trivial to trace the whole hierarchy of views, but I think it can
result in the FROM list or some JOIN lists becoming too long. How about
setting from_collapse_limit / join_collapse_limit to a lower-than-default
value?
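
For example, something like this at the session level before running the
EXPLAIN (the values are only illustrative and would need tuning for this
schema; both GUCs default to 8):

    -- Lower limits keep the planner from flattening the views' FROM/JOIN
    -- lists into one huge join search problem.
    SET from_collapse_limit = 4;
    SET join_collapse_limit = 4;
    EXPLAIN <somequery>;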
--
Antonin Houska
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt
Web: http://www.postgresql-support.de, http://www.cybertec.at