Re: Hash Aggregate plan picked for very large table == out of memory

From: Gregory Stark <stark(at)enterprisedb(dot)com>
To: "Mason Hale" <masonhale(at)gmail(dot)com>
Cc: <pgsql-general(at)postgresql(dot)org>
Subject: Re: Hash Aggregate plan picked for very large table == out of memory
Date: 2007-06-14 23:12:59
Message-ID: 873b0ut9j8.fsf@oxford.xeocode.com
Lists: pgsql-general

"Mason Hale" <masonhale(at)gmail(dot)com> writes:

> The default_statistics_target was originally 200.
> I upped it to 1000 and still get the same results.

You did analyze the table after upping the target, right? Actually, I would
expect you would be better off not raising it so high globally and instead
raising it for just this one table with

ALTER TABLE table ALTER [ COLUMN ] column SET STATISTICS integer
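Something along these lines, assuming the table is "mytable" and the grouping
column is "ref_id" (substitute your actual names):

    -- raise the per-column statistics target, then regather stats
    ALTER TABLE mytable ALTER COLUMN ref_id SET STATISTICS 1000;
    ANALYZE mytable;

The ANALYZE afterwards matters: the new target has no effect until the
statistics are recollected.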

> I am working around this by setting enable_hashagg = off -- but it just
> seems like a case where the planner is not picking the best strategy?

Sadly, guessing the number of distinct values from a sample is actually a
pretty hard problem. How many distinct values do you actually get when you run
with enable_hashagg off?
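You can compare the planner's guess against reality with something like this
(again using the placeholder names mytable/ref_id):

    SET enable_hashagg = off;
    -- compare estimated vs. actual rows on the GroupAggregate node
    EXPLAIN ANALYZE SELECT ref_id, count(*) FROM mytable GROUP BY ref_id;

If the actual row count on the aggregate node is far above the estimate, the
ndistinct estimate is what's steering the planner toward the hash aggregate.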

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
