From: | Marc Millas <marc(dot)millas(at)mokadb(dot)com> |
---|---|
To: | Avin Kavish <avin(at)baseboard(dot)ai> |
Cc: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, David Rowley <dgrowleyml(at)gmail(dot)com>, "pgsql-general(at)lists(dot)postgresql(dot)org" <pgsql-general(at)lists(dot)postgresql(dot)org> |
Subject: | Re: bug or lacking doc hint |
Date: | 2023-06-26 12:22:52 |
Message-ID: | CADX_1abEKKW8SSYjB_Pe3EZyBkX2dG3SXX1w6tXxrnve5GWBxw@mail.gmail.com |
Lists: | pgsql-general |
On Mon, Jun 26, 2023 at 5:47 AM Avin Kavish <avin(at)baseboard(dot)ai> wrote:
> Sounds like the problem you are having is, the server is running out of
> temporary resources for the operation that users are trying to do. So
> according to Tom, on the postgres side, the operation cannot be optimized
> further.
>
> I think you have few choices here,
> - See if increasing the resources of the server will allow them to run
> the operation
> - Ask users not to do that operation
> - Use a extension like citus to scale horizontally
>
> But I'm thinking why a massively inefficient join is needed in the first
> place. Shouldn't joins be for following keyed relationships. So ideally a
> unique indexed column, but at the very least an indexed column. Why is a
> join required on a dynamically calculated substring? Can it be made into a
> static computed value and indexed? Substring sounds like an op that should
> be in the filter stage.
>
> Can you describe your data model? Maybe we can give some specific advice.
>
There is a set of big tables containing fine-grained health data. The DB is
accessed by people doing research on various kinds of illness. So, by
nature, all columns (I mean ALL) can be used in every kind of SQL,
including, obviously, lots of intricate joins.
From time to time, people write queries that may take more than 2 days to
complete.
So the idea of indexing the 'appropriate' columns translates into indexing all
columns of all tables, including the big ones with 30+ columns.
With only primary key indexes, the DB is already 15 TB+,
and my own experience of putting 30 indexes on one table is not very
positive.
so...
BTW, rewriting the original request using a CTE and UNION does complete in 134
seconds, doing 3 merge joins.
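For readers hitting the same issue, here is a minimal sketch of the kind of rewrite meant above (table and column names are hypothetical, not the actual schema): an OR in the join condition is split into one join per branch, combined with UNION, which lets the planner choose a merge or hash join for each branch instead of falling back to a nested loop.

```sql
-- Original form: the OR'd join condition forces a nested loop.
-- SELECT a.id, b.val
-- FROM a JOIN b ON a.code = b.code
--               OR a.code = substr(b.longcode, 1, 4);

-- Rewritten: each branch can use a merge or hash join on its own.
-- Note Tom Lane's caveat downthread: UNION deduplicates, so rows that
-- should legitimately appear more than once in the result would be
-- lost; this rewrite is only safe when that cannot happen (or when
-- dedup is acceptable), otherwise the two forms are not equivalent.
SELECT a.id, b.val
FROM a JOIN b ON a.code = b.code
UNION
SELECT a.id, b.val
FROM a JOIN b ON a.code = substr(b.longcode, 1, 4);
```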
And I have one more question: the EXPLAIN ANALYZE plan shows that Postgres
decided to do external sorts using around 2 GB of disk space.
I did SET work_mem TO '4GB' to try to get those sorts done in memory. No
effect. How can I tell the planner to do those sorts in memory? Thanks
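For anyone tuning the same thing, a minimal session-level sketch (the query itself is a placeholder). Two things commonly explain a SET having "no effect": the SET must be issued in the same session as the query, and work_mem is a per-sort-node limit, so a plan with several sorts (or parallel workers, each of which gets its own work_mem) can still spill even with a large setting.

```sql
-- Applies to the current session only; each individual sort node
-- may use up to this much memory before spilling to disk.
SET work_mem = '4GB';

-- Verify in the plan output: "Sort Method: quicksort" means in-memory,
-- "Sort Method: external merge  Disk: ..." means it still spilled.
EXPLAIN (ANALYZE, BUFFERS) SELECT ...;
```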
> Regards,
> Avin
>
> On Mon, Jun 26, 2023 at 3:57 AM Marc Millas <marc(dot)millas(at)mokadb(dot)com>
> wrote:
>
>>
>> On Sun, Jun 25, 2023 at 11:48 PM Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>>
>>> David Rowley <dgrowleyml(at)gmail(dot)com> writes:
>>> > The problem is that out of the 3 methods PostgreSQL uses to join
>>> > tables, only 1 of them supports join conditions with an OR clause.
>>> > Merge Join cannot do this because results can only be ordered one way
>>> > at a time. Hash Join technically could do this, but it would require
>>> > that it built multiple hash tables. Currently, it only builds one
>>> > table. That leaves Nested Loop as the join method to implement joins
>>> > with OR clauses. Unfortunately, nested loops are quadratic and the
>>> > join condition must be evaluated once per each cartesian product row.
>>>
>>> We can do better than that if the OR'd conditions are each amenable
>>> to an index scan on one of the tables: then it can be a nestloop with
>>> a bitmap-OR'd inner index scan. I thought the upthread advice to
>>> convert the substr() condition into something that could be indexed
>>> was on-point.
>>>
>> ok. but one of the tables within the joined tables is 10 billion rows,
>> split into 120 partitions. Creating something like 20 more indexes to
>> fulfill that condition has its own problems.
>>
>>>
>>> > Tom Lane did start some work [1] to allow the planner to convert some
>>> > queries to use UNION instead of evaluating OR clauses, but, if I
>>> > remember correctly, it didn't handle ORs in join conditions, though
>>> > perhaps having it do that would be a natural phase 2. I don't recall
>>> > why the work stopped.
>>>
>>> As I recall, I was having difficulty convincing myself that
>>> de-duplication of results (for cases where the same row satisfies
>>> more than one of the OR'd conditions) would work correctly.
>>> You can't just blindly make it a UNION because that might remove
>>> identical rows that *should* appear more than once in the result.
>>>
>>
>> I did rewrite the query using a CTE and UNIONs. For that query, no
>> deduplication issue arises.
>> But my problem is that this DB will be used by a bunch of people writing raw
>> SQL queries, and I cannot let them write queries that run for ages and
>> eventually crash against temp_file_limit after hours, every now
>> and then.
>> So my understanding of the above is that I must tell the users NOT to
>> use OR clauses in joins,
>> which may be a problem in itself.
>> regards
>> Marc
>>
>>
>>> regards, tom lane
>>>
>>
>> Marc MILLAS
>>
>