From: Gregory Stark <stark(at)enterprisedb(dot)com>
To: wickro <robwickert(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: work_mem greater than 2GB issue
Date: 2009-05-14 15:11:14
Message-ID: 87eiuru3cd.fsf@oxford.xeocode.com
Lists: pgsql-general
wickro <robwickert(at)gmail(dot)com> writes:
> Hi everyone,
>
> I have a largish table (> 8GB) and I'm doing a very simple single
> GROUP BY on it. I am the only user of this database. If I set work_mem
> to anything under 2GB (e.g. 1900MB), the postmaster process's memory
> use levels off at that value while it's performing the GROUP BY. There
> is only one hash operation, so that is what I would expect. But with
> anything larger it eats up all memory until it can't get any more
> (around 7.5GB on an 8GB machine). Has anyone experienced anything of
> this sort before?
What does EXPLAIN say for both cases? I suspect what's happening is that the
planner estimates it will need about 2GB to hash all the values when in fact
it would need >8GB. So for work_mem settings under 2GB it uses a sort and not
a hash at all; for settings over 2GB it tries to use a hash and runs out of
memory.
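
For what it's worth, you can see which plan you're getting at each setting by
comparing EXPLAIN output. A minimal sketch (table and column names here are
made up; substitute your own):

    -- hypothetical names: big_table with grouping column some_col
    SET work_mem = '1900MB';
    EXPLAIN SELECT some_col, count(*) FROM big_table GROUP BY some_col;
    -- expect a Sort feeding a GroupAggregate if the planner thinks
    -- the hash table won't fit in work_mem

    SET work_mem = '2500MB';
    EXPLAIN SELECT some_col, count(*) FROM big_table GROUP BY some_col;
    -- expect a HashAggregate; if the distinct-value estimate is far
    -- too low, the hash table can grow well past work_mem at run time

If the second plan shows a HashAggregate whose row estimate is far below the
real number of distinct values, that would confirm the misestimate.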
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training!