From: Matthew Wakeling <matthew(at)flymine(dot)org>
To: Alan McKay <alan(dot)mckay(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: query memory consumption
Date: 2009-09-22 10:58:27
Message-ID: alpine.DEB.2.00.0909221154340.19472@aragorn.flymine.org
Lists: pgsql-performance
On Mon, 21 Sep 2009, Alan McKay wrote:
> We have explain and analyze which tell us about the cost of a query
> time-wise, but what does one use to determine (and trace / predict?)
> memory consumption?
In Postgres, memory consumption for any single operation is generally capped
at the value of work_mem. However, a given query can consist of more than one
operation. Generally, only heavy operations like sorts and hashes consume
work_mem, so it should be possible to look at the EXPLAIN output, count those
operations, multiply by work_mem, and get the maximum amount of RAM that the
query can use.
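As a sketch of that counting exercise (the table and column names here are
made up for illustration, and the exact plan shape depends on the planner):

```sql
-- Each Sort or Hash node in the plan may use up to work_mem,
-- so a plan with one Sort and one Hash could use roughly 2 x work_mem.
EXPLAIN
SELECT c.name, o.total
FROM orders o
JOIN customers c ON c.id = o.customer_id
ORDER BY o.total;
-- Schematic plan output:
--  Sort                              <- up to work_mem
--    Sort Key: o.total
--    ->  Hash Join
--          ->  Seq Scan on orders o
--          ->  Hash                  <- up to work_mem
--                ->  Seq Scan on customers c
```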
However, sometimes a query's data will not fit neatly into work_mem. At that
point, Postgres will write the data to temporary files on disc. It is harder
to predict what size those will be. However, EXPLAIN ANALYSE will sometimes
give you a figure for how big a sort was, for example.
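For instance (a hypothetical table, shown only to illustrate the output
format), a sort that spills to disc reports its on-disc size in the plan:

```sql
EXPLAIN ANALYSE SELECT * FROM big_table ORDER BY some_column;
-- When the sort does not fit in work_mem, the Sort node reports e.g.:
--    Sort Method:  external merge  Disk: 102400kB
-- With enough work_mem, the same node would instead report e.g.:
--    Sort Method:  quicksort  Memory: 65536kB
```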
Matthew
--
Reality is that which, when you stop believing in it, doesn't go away.
-- Philip K. Dick