From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: wickro <robwickert(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: work_mem greater than 2GB issue
Date: 2009-05-14 15:06:59
Message-ID: 11901.1242313619@sss.pgh.pa.us
Lists: pgsql-general
wickro <robwickert(at)gmail(dot)com> writes:
> I have a largish table (> 8GB). I'm doing a very simple single group
> by on it. I am the only user of this database. If I set work_mem to
> anything under 2GB (e.g. 1900MB), the postmaster process stops at that
> value while it's performing its group by. There is only one hash
> operation, so that is what I would expect. But anything larger and it
> eats up all memory until it can't get any more (around 7.5GB on an 8GB
> machine). Has anyone experienced anything of this sort before?
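>
> For illustration, the setup is along these lines (the table and
> column names here are simplified stand-ins, not my real schema):
>
>     SET work_mem = '1900MB';      -- under 2GB: memory plateaus as expected
>     -- SET work_mem = '2100MB';   -- over 2GB: memory grows until exhausted
>     SELECT some_key, count(*)
>       FROM big_table              -- the > 8GB table
>      GROUP BY some_key;           -- planned as a single hash aggregate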
It's possible that you've found a bug, but you have not provided nearly
enough information to let anyone reproduce it for investigation.
What Postgres version is this exactly? Is it a 32- or 64-bit build?
What is the exact query you're executing, and what does EXPLAIN show
as its plan?
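For instance (the table and column names below are just the stand-ins
from your sketch; substitute your real query):

    SELECT version();   -- reports the exact version and build architecture
    SHOW work_mem;      -- the setting in effect for the session
    EXPLAIN SELECT some_key, count(*)
              FROM big_table
             GROUP BY some_key;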
regards, tom lane