From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Why query takes soo much time
Date: 2011-05-16 14:15:59
Message-ID: 15163.1305555359@sss.pgh.pa.us
Lists: pgsql-performance
Denis de Bernardy <ddebernardy(at)yahoo(dot)com> writes:
> An alternative plan could have been to hash join the tables together,
> to sort the result set, and to apply the limit/offset on the resulting
> set.
Indeed. I rather wonder why the planner didn't do that to start with.
This plan looks to me like it might be suffering from insufficient
work_mem to allow use of a hash join. Or possibly the OP changed some
of the cost_xxx or enable_xxx settings in a misguided attempt to force
it to use indexes instead. As a rule of thumb, whole-table joins
probably ought not be using nestloop plans, and that frequently means
that indexes are worthless for them.
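A minimal diagnostic sketch along these lines: check whether any enable_* toggle has been switched off and whether work_mem is too small for a hash join, then retry the plan with session-local settings. The 64MB figure and the placeholder query are illustrative assumptions, not recommendations from this thread.

```sql
-- See whether planner toggles were changed from their defaults
-- (enable_hashjoin = off, for instance, would force the nestloop plan),
-- and what work_mem the session is running with.
SELECT name, setting
FROM pg_settings
WHERE name LIKE 'enable\_%' OR name = 'work_mem';

-- Give the planner enough memory to consider a hash join, for this
-- session only, and make sure no join method is artificially disabled.
SET work_mem = '64MB';          -- illustrative value, tune to the workload
SET enable_hashjoin = on;       -- restore defaults if they were turned off
SET enable_nestloop = on;

EXPLAIN ANALYZE
SELECT ...;                     -- the original query goes here
```

If the plan switches to a hash join after raising work_mem, the original setting was the bottleneck; if it only switches after re-enabling a toggle, a prior enable_*/cost_* change was forcing the index nestloop.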
But in any case, as Craig noted, the real elephant in the room is the
huge OFFSET value. It seems likely that this query is not standing
alone but is meant as one of a series that's supposed to provide
paginated output, and if so the total cost of the series is just going
to be impossible no matter what. The OP needs to think about using a
cursor or some such to avoid repeating most of the work each time.
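A sketch of the cursor approach, assuming all pages are fetched within one transaction; the table names (t1, t2), join key, sort column, and page size of 50 are hypothetical stand-ins for the OP's query:

```sql
BEGIN;

-- Sort the joined result once. Each FETCH then resumes where the previous
-- one stopped, instead of redoing the join/sort and discarding OFFSET rows
-- for every page.
DECLARE page_cur CURSOR FOR
    SELECT *
    FROM t1
    JOIN t2 USING (id)
    ORDER BY t1.created_at;

FETCH 50 FROM page_cur;   -- page 1
FETCH 50 FROM page_cur;   -- page 2: no repeated work

CLOSE page_cur;
COMMIT;
```

The trade-off is that the cursor ties up a transaction (and its snapshot) for the duration of the paging session, so this suits batch-style consumption rather than stateless web pagination.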
regards, tom lane