From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Faheem Mitha <faheem(at)email(dot)unc(dot)edu>
Cc: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, pgsql-performance(at)postgresql(dot)org
Subject: Re: experiments in query optimization
Date: 2010-03-30 19:59:46
Message-ID: 603c8f071003301259u140d8556vc54ff46855013bd3@mail.gmail.com
Lists: pgsql-performance
On Tue, Mar 30, 2010 at 12:30 PM, Faheem Mitha <faheem(at)email(dot)unc(dot)edu> wrote:
> Sure, but define sane setting, please. I guess part of the point is that I'm
> trying to keep memory low, and it seems this is not part of the planner's
> priorities. That is, it does not take memory usage into consideration when
> choosing a plan. If that is wrong, let me know, but that is my
> understanding.
I don't quite understand why you're confused here. We've already
explained to you that the planner will not employ a plan that uses
more than the amount of memory defined by work_mem for each sort or
hash.
Typical settings for work_mem are between 1MB and 64MB. 1GB is enormous.
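As a minimal sketch (the 32MB figure below is purely illustrative, not a
recommendation from this thread), work_mem can be lowered for a single
session or set as the server-wide default in postgresql.conf:

    -- Lower work_mem for the current session only; the value is an example.
    SET work_mem = '32MB';

    -- Verify the effective setting.
    SHOW work_mem;

    -- To change the server-wide default instead, edit postgresql.conf:
    --   work_mem = 32MB
    -- and reload the server (e.g. pg_ctl reload).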
>>>> You might need to create some indices, too.
>>>
>>> Ok. To what purpose? This query picks up everything from the
>>> tables and the planner does table scans, so conventional wisdom,
>>> and indeed my experience, says that indexes are not going to be so
>>> useful.
>>
>> There are situations where scanning the entire table to build up a
>> hash table is more expensive than using an index. Why not test it?
>
> Certainly, but I don't know what you and Robert have in mind, and I'm not
> experienced enough to make an educated guess. I'm open to specific
> suggestions.
Try creating an index on geno on the columns that are being used for the join.
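As a sketch only (the actual join columns on geno are not shown in this
message, so the column names and index name below are hypothetical
placeholders):

    -- Hypothetical example: replace idlink_id and anno_id with the columns
    -- geno is actually joined on; the index name is arbitrary.
    CREATE INDEX geno_join_idx ON geno (idlink_id, anno_id);

    -- Refresh planner statistics so the new index is considered.
    ANALYZE geno;

Then re-run EXPLAIN ANALYZE on the query to see whether the planner
switches away from the full table scans.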
...Robert