From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Phil Endecott" <spam_from_postgresql_general(at)chezphil(dot)org>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: "explain analyse" much slower than actual query
Date: 2007-01-28 21:28:15
Message-ID: 22445.1170019695@sss.pgh.pa.us
Lists: pgsql-general
"Phil Endecott" <spam_from_postgresql_general(at)chezphil(dot)org> writes:
> If I understand it correctly, it is still doing a sequential scan on
> part_tsearch that does not terminate early due to the limit clause. So
> I'm still seeing run times that are rather worse than I think should be
> possible. Can it not step through the indexes in the way that it does
> for a Merge Join until it has got enough results to satisfy the limit,
> and then terminate?
Nope, there is not that much intelligence about NOT IN.
You could possibly manually rewrite the thing as a LEFT JOIN
with a WHERE inner-join-key IS NULL clause. This would probably
lose if most of the outer relation's rows join to many inner rows,
though.
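
A sketch of that manual rewrite, using hypothetical table and column names (the actual schema from this thread is not shown here):

```sql
-- Hypothetical NOT IN form: parts with no entry in part_tsearch.
SELECT p.id
FROM parts p
WHERE p.id NOT IN (SELECT pt.part_id FROM part_tsearch pt)
LIMIT 50;

-- Rewritten as an anti-join: LEFT JOIN, then keep only the rows
-- where no inner row matched (the inner join key comes back NULL).
SELECT p.id
FROM parts p
LEFT JOIN part_tsearch pt ON pt.part_id = p.id
WHERE pt.part_id IS NULL
LIMIT 50;
```

One caveat when trying this: the two forms are only equivalent if the subquery column cannot be NULL, since NOT IN returns no rows at all when the subquery produces any NULL value.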
regards, tom lane