GEQO optimizer (was Re: Backend message type 0x44 arrived while idle)

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Oleg Bartunov <oleg(at)sai(dot)msu(dot)su>
Cc: pgsql-hackers(at)postgreSQL(dot)org
Subject: GEQO optimizer (was Re: Backend message type 0x44 arrived while idle)
Date: 1999-05-17 00:57:40
Message-ID: 6087.926902660@sss.pgh.pa.us
Lists: pgsql-hackers

I wrote:
> Oleg Bartunov <oleg(at)sai(dot)msu(dot)su> writes:
>> While testing 6.5 cvs to see what progress Postgres has made with
>> big joins, I get the following error messages:

> I think there are still some nasty bugs in the GEQO planner.

I have just committed some changes that fix bugs in the GEQO planner
and limit its memory usage. It should now be possible to use GEQO even
for queries that join a very large number of tables --- at least from
the standpoint of not running out of memory during planning. (It can
still take a while :-(. I think that the default GEQO parameter
settings may be configured to use too many generations, but haven't
poked at this yet.)
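
(For the record, the kind of query I mean is just a flat join over many
tables. A minimal sketch --- the table and column names below are made
up purely for illustration:

    -- Hypothetical 12-way join of the sort GEQO should now plan
    -- without exhausting memory; t1..t12 and their id columns are
    -- illustrative only.
    SELECT t1.id
    FROM t1, t2, t3, t4, t5, t6, t7, t8, t9, t10, t11, t12
    WHERE t1.id = t2.id
      AND t2.id = t3.id
      AND t3.id = t4.id
      AND t4.id = t5.id
      AND t5.id = t6.id
      AND t6.id = t7.id
      AND t7.id = t8.id
      AND t8.id = t9.id
      AND t9.id = t10.id
      AND t10.id = t11.id
      AND t11.id = t12.id;
)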

I have observed that the regular optimizer requires about 50MB to plan
some ten-way joins, and can exceed my system's 128MB process data limit
on some eleven-way joins. We currently have the GEQO threshold set at
11, which prevents the latter case by default --- but 50MB is a lot.
I wonder whether we shouldn't back the GEQO threshold off to 10.
(When I suggested setting it to 11, I was only looking at speed relative
to GEQO, not memory usage. There is now a *big* difference in memory
usage...) Comments?
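
(If you want to experiment, the threshold can be overridden per session.
A minimal sketch, using the geqo_threshold variable name from later
releases --- the 6.5-era SET GEQO syntax differs:

    -- Treat joins of 10 or more tables as GEQO territory for this session.
    SET geqo_threshold = 10;
    -- Confirm the session setting.
    SHOW geqo_threshold;
)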

regards, tom lane
