From: | Jeff Janes <jeff(dot)janes(at)gmail(dot)com> |
---|---|
To: | Sergey Burladyan <eshkinkot(at)gmail(dot)com> |
Cc: | "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org> |
Subject: | Re: Looks like merge join planning time is too big, 55 seconds |
Date: | 2013-08-02 15:29:34 |
Message-ID: | CAMkU=1x51iVmUcLewMUBLB3fKW9tkpfsL0iYQuXp33aTAiQVPA@mail.gmail.com |
Lists: | pgsql-performance |
On Thu, Aug 1, 2013 at 5:16 PM, Sergey Burladyan <eshkinkot(at)gmail(dot)com> wrote:
> I also find this trace for other query:
> explain select * from xview.user_items_v v where ( v.item_id = 132358330 );
>
>
> If I'm not mistaken, there may be two code paths like this here:
> (1) mergejoinscansel -> scalarineqsel-> ineq_histogram_selectivity -> get_actual_variable_range -> index_getnext
> (2) scalargtsel -> scalarineqsel -> ineq_histogram_selectivity -> get_actual_variable_range -> index_getnext
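Both paths are exercised during planning alone, so the slowness can be reproduced without running the query. A quick way to confirm that (a sketch, not part of the original thread) is to time a bare EXPLAIN from psql, since EXPLAIN plans the query without executing it:

```sql
-- In psql, enable client-side timing, then plan without executing.
-- A bare EXPLAIN goes through mergejoinscansel()/scalarineqsel() just
-- as the real query would, so a slow result here points at planning.
\timing on
EXPLAIN select * from xview.user_items_v v where ( v.item_id = 132358330 );
```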
Yeah, I think you are correct.
> And maybe the get_actual_variable_range() function is too expensive to
> call against my bloated table items, with its bloated index items_user_id_idx?
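Whether the table and index really are bloated can be measured with the contrib pgstattuple module, assuming it is installed; this is a sketch added for illustration, using the table and index names from the thread:

```sql
-- pgstattuple ships in contrib; note that it scans the whole relation,
-- so it can itself be slow on a large table.
CREATE EXTENSION IF NOT EXISTS pgstattuple;

-- Share of the heap occupied by dead tuples:
SELECT dead_tuple_percent FROM pgstattuple('items');

-- B-tree-specific stats for the index (low density and high
-- fragmentation suggest bloat):
SELECT avg_leaf_density, leaf_fragmentation
FROM pgstatindex('items_user_id_idx');
```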
But why is it bloated in this way? It must be visiting many thousands
of dead/invisible rows before finding the first visible one. However,
B-tree indexes have a mechanism to remove dead tuples from index pages,
so they don't get followed over and over again (see "kill_prior_tuple"). So
is that mechanism not working, or are the tuples not dead but just
invisible (i.e. inserted by a still-open transaction)?
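The "invisible but not dead" case can be checked directly: tuples stay in that state while some transaction that predates them is still open. A sketch of how to look for such transactions (assuming PostgreSQL 9.2+ column names in pg_stat_activity):

```sql
-- Oldest open transactions; an old xact_start here keeps newer
-- tuples invisible and prevents kill_prior_tuple from marking them.
SELECT pid, xact_start, state, query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start
LIMIT 5;

-- Orphaned prepared transactions pin visibility the same way:
SELECT gid, prepared FROM pg_prepared_xacts;
```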
Cheers,
Jeff