Re: Hash Aggregate plan picked for very large table == out of memory

From: "Mason Hale" <masonhale(at)gmail(dot)com>
To: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Hash Aggregate plan picked for very large table == out of memory
Date: 2007-06-14 21:09:42
Message-ID: 8bca3aa10706141409oa6ddc6fk305b446f45229bba@mail.gmail.com
Lists: pgsql-general

I should have mentioned this earlier: running the same query against the same
data on 8.1.5 does not produce a hash aggregate plan or an out-of-memory
error. (Note: the hardware is different but very similar -- the main
difference is that the 8.1.9 server, the one with the error, has faster disks.)
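For context, here is a sketch of how the plan choice can be checked and, if
needed, steered toward a sort-based GroupAggregate. The query below is a
hypothetical stand-in (only the table and column names are taken from this
thread), not the actual query under discussion:

```sql
-- Hypothetical query; substitute the real one from this thread.
EXPLAIN
SELECT target_page_id, count(*)
FROM page_page_link
GROUP BY target_page_id;

-- If the planner picks HashAggregate and the number of groups is badly
-- underestimated, disabling hash aggregation for the session forces a
-- sort-based GroupAggregate, which spills to disk rather than trying to
-- hold every group in memory:
SET enable_hashagg = off;
```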

On 6/14/07, Mason Hale <masonhale(at)gmail(dot)com> wrote:
>
> Thanks Tom. Here's more info:
>
> > What have you got work_mem set to?
>
> 40960
>
> > What's the actual number of groups
> > (target_page_id values)?
>
>
> Approximately 40 million (I'll have a more precise number when the query
> finishes running).
>
> Maybe this helps?
>
> crystal=> select null_frac, n_distinct, correlation from pg_stats where
> tablename = 'page_page_link' and attname = 'target_page_id';
> null_frac | n_distinct | correlation
> -----------+------------+-------------
> 0 | 550017 | 0.240603
> (1 row)
>
> Mason
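Following up on the pg_stats numbers above: n_distinct is estimated at
550,017 while the actual group count is around 40 million -- an underestimate
of roughly 70x, which is the kind of error that can lead the planner to
choose HashAggregate. One option is to raise the column's statistics target
and re-analyze so the estimate improves; the value 1000 below is an
illustrative choice, not something recommended in this thread:

```sql
-- Raise the per-column sample size (the default statistics target in
-- 8.1 is 10; 1000 is the maximum). Then re-gather statistics.
ALTER TABLE page_page_link
  ALTER COLUMN target_page_id SET STATISTICS 1000;
ANALYZE page_page_link;
```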
