Re: Death postgres

From: Marc Millas <marc(dot)millas(at)mokadb(dot)com>
To: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: Death postgres
Date: 2023-05-10 20:52:47
Message-ID: CADX_1aZk7TJMrLUjtLC=Bsb9XmM3W8eyU2AyOrXTj1=ZnMzTHQ@mail.gmail.com
Lists: pgsql-general

On Wed, May 10, 2023 at 7:24 PM Peter J. Holzer <hjp-pgsql(at)hjp(dot)at> wrote:

> On 2023-05-10 16:35:04 +0200, Marc Millas wrote:
> > Unique  (cost=72377463163.02..201012533981.80 rows=1021522829864 width=97)
> >   ->  Gather Merge  (cost=72377463163.02..195904919832.48 rows=1021522829864 width=97)
> ...
> >           ->  Parallel Hash Left Join  (cost=604502.76..1276224253.51 rows=204304565973 width=97)
> >                 Hash Cond: ((t1.col_ano)::text = (t2.col_ano)::text)
> ...
> >
> > // so... the planner guesses that those two joins will generate 1000
> > billion rows...
>
> Are some of the col_ano values very frequent? If, say, the value 42 occurs
> 1 million times in both table_a and table_b, the join will create 1
> trillion rows for that value alone. That doesn't explain the crash or the
> disk usage, but it would explain the crazy cost (and would probably be a
> hint that this query is unlikely to finish in any reasonable time).
>
> hp
>
Good guess, even if a bit surprising: there is one (and only one) "value"
which fits your supposition: NULL, with 750000 occurrences in each table,
which perfectly fits the planner's row estimate.
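(For reference, that kind of skew can be checked with a frequency count on
the join key; a minimal sketch, assuming the table is named table_a as in
your example:

SELECT col_ano, count(*) AS n
FROM table_a
GROUP BY col_ano
ORDER BY n DESC
LIMIT 10;

GROUP BY treats all NULLs as one group, so a heavy NULL skew like the
750000 above shows up as the top row.)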
One question: what does Postgres do when it plans to hash 1000 billion
rows? Did it allocate an appropriate "space" to handle those 1000 billion
hash values?
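
For illustration, the arithmetic behind that estimate is easy to reproduce
at small scale; a sketch with hypothetical data, using your value 42 and
1000 duplicates per side instead of 1 million:

CREATE TEMP TABLE a AS
    SELECT '42'::text AS col_ano FROM generate_series(1, 1000);
CREATE TEMP TABLE b AS
    SELECT '42'::text AS col_ano FROM generate_series(1, 1000);
ANALYZE a;
ANALYZE b;

-- every left row matches every right row, so the planner estimates
-- about 1000 * 1000 = 1000000 join rows:
EXPLAIN SELECT * FROM a t1 LEFT JOIN b t2 ON t1.col_ano = t2.col_ano;

At 1 million duplicates per side the same multiplication gives the ~10^12
rows in the plan above.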
thanks,
MM

> --
>    _  | Peter J. Holzer    | Story must make more sense than reality.
> |_|_) |                    |
> | |   | hjp(at)hjp(dot)at  |    -- Charles Stross, "Creative writing
> __/   | http://www.hjp.at/ |       challenge!"
>
