Re: Hash Aggregate plan picked for very large table == out of memory

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Mason Hale" <masonhale(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Hash Aggregate plan picked for very large table == out of memory
Date: 2007-06-14 21:40:20
Message-ID: 20937.1181857220@sss.pgh.pa.us
Lists: pgsql-general

"Mason Hale" <masonhale(at)gmail(dot)com> writes:
>> What's the actual number of groups
>> (target_page_id values)?

> Approximately 40 million (I'll have a more precise number when the query
> finishes running).

Ouch. The discrepancy between that and the 550K estimate is what's
killing you --- the hash table will be 80 times bigger than the planner
thinks. You sure that increasing the stats target doesn't improve
matters? (I suppose the distribution is one of these "long tail"
affairs...)
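
For the record, raising the per-column target and re-analyzing looks
roughly like this ("page_links" is just a stand-in for whatever table
holds target_page_id; substitute your actual table name):

    -- "page_links" is a placeholder table name
    ALTER TABLE page_links ALTER COLUMN target_page_id SET STATISTICS 1000;
    ANALYZE page_links;

A higher target makes ANALYZE sample more rows, which usually improves
the ndistinct estimate the planner uses to size the hash table.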

The best solution might be to reduce work_mem --- 40MB is fairly high
for a general-purpose setting anyway IMO, since you need to worry about
complex queries eating multiples of work_mem.
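
If you want to experiment without touching postgresql.conf, work_mem
can be set per session (the 8MB here is only an example value):

    SET work_mem = '8MB';   -- example value; affects this session only
    -- run or EXPLAIN the aggregate query here
    RESET work_mem;         -- back to the configured default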

regards, tom lane
