From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: stiening(at)comcast(dot)net
Cc: pgsql-bugs(at)postgresql(dot)org
Subject: Re: BUG #8013: Memory leak
Date: 2013-03-31 17:16:12
Message-ID: 12427.1364750172@sss.pgh.pa.us
Lists: pgsql-bugs
stiening(at)comcast(dot)net writes:
> The query:
> SELECT pts_key,count(*)
> FROM tm_tm_pairs GROUP BY pts_key HAVING count(*) !=1 ORDER BY
> pts_key
> Which is executed as:
> GroupAggregate  (cost=108680937.80..119278286.60 rows=470993280 width=4)
>   Filter: (count(*) <> 1)
>   ->  Sort  (cost=108680937.80..109858421.00 rows=470993280 width=4)
>         Sort Key: pts_key
>         ->  Seq Scan on tm_tm_pairs  (cost=0.00..8634876.80 rows=470993280 width=4)
> uses all available memory (32GB). pts_key is an integer and the table
> contains about 500 million rows.
That query plan doesn't look like it should produce any undue memory
consumption on the server side. How many distinct values of pts_key are
there, and what are you using to collect the query result client-side?
psql, for instance, would try to absorb the whole query result
in-memory, so there'd be a lot of memory consumed by psql if there are
a lot of pts_key values. (You can set FETCH_COUNT to alleviate that.)
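[For reference, a minimal sketch of the FETCH_COUNT approach in a psql session, reusing the query from the report; the batch size of 10000 is just an illustrative value:]

```sql
-- In psql, fetch the result in batches via a cursor instead of
-- absorbing the entire result set into client memory at once.
\set FETCH_COUNT 10000

SELECT pts_key, count(*)
FROM tm_tm_pairs
GROUP BY pts_key
HAVING count(*) != 1
ORDER BY pts_key;
```

With FETCH_COUNT set, psql retrieves and prints the rows in groups of that size, so client-side memory stays bounded even for hundreds of millions of distinct pts_key values.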
A different line of thought is that you might have set work_mem to
an unreasonably large value --- the sort step will happily try to
consume work_mem worth of memory.
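[A quick way to check this, sketched below; the 256MB figure is only an illustrative setting, not a recommendation:]

```sql
-- Show the current per-sort memory limit for this session.
SHOW work_mem;

-- If it was set unreasonably high, lower it for this session;
-- each sort node may use up to this much memory before spilling to disk.
SET work_mem = '256MB';
```

Note that work_mem is a per-operation limit, so a plan with several sorts or hashes can consume a multiple of it.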
regards, tom lane