From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Jon Nelson <jnelson+pgsql(at)jamponi(dot)net>, pgsql-performance(at)postgresql(dot)org
Subject: Re: queries with lots of UNIONed relations
Date: 2011-01-13 22:41:40
Message-ID: AANLkTikH4eyp5JwJpUEkfOX4u2hOemGDCoXPEf01dkAv@mail.gmail.com
Lists: pgsql-performance
On Thu, Jan 13, 2011 at 5:26 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Robert Haas <robertmhaas(at)gmail(dot)com> writes:
>> On Thu, Jan 13, 2011 at 3:12 PM, Jon Nelson <jnelson+pgsql(at)jamponi(dot)net> wrote:
>>> I still think that having UNION do de-duplication of each contributory
>>> relation is a beneficial thing to consider -- especially if postgresql
>>> thinks the uniqueness is not very high.
>
>> This might be worth a TODO.
>
> I don't believe there is any case where hashing each individual relation
> is a win compared to hashing them all together. If the optimizer were
> smart enough to be considering the situation as a whole, it would always
> do the latter.
You might be right, but I'm not sure. Suppose that there are 100
inheritance children, and each has 10,000 distinct values, but none of
them are common between the tables. In that situation, de-duplicating
each individual table requires a hash table that can hold 10,000
entries. But de-duplicating everything at once requires a hash table
that can hold 1,000,000 entries.
Or am I all wet?
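
Just to spell out what I mean, here is a sketch of the two hash-table shapes
(the child tables child_1 .. child_100 and the column val are made up for
illustration, and this isn't meant as a literal query rewrite the planner
would produce):

    -- Hashing everything together: the single hash table has to be able
    -- to hold every distinct value across all children, i.e. up to
    -- 100 * 10,000 = 1,000,000 entries in this scenario.
    SELECT val FROM child_1
    UNION
    SELECT val FROM child_2;   -- ... and so on through child_100

    -- De-duplicating each contributory relation first: each per-child
    -- hash table only ever needs to hold ~10,000 entries, and the final
    -- merge across children then runs over the already-thinned inputs.
    SELECT DISTINCT val FROM child_1
    UNION
    SELECT DISTINCT val FROM child_2;  -- ... and so on through child_100

The presumed advantage is that a 10,000-entry hash table is far more likely
to fit in work_mem than a 1,000,000-entry one.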
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company