From: Gregory Stark <stark(at)enterprisedb(dot)com>
To: "Thomas Munz" <thomas(at)ecommerce(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Wired behavor with LIMIT
Date: 2007-05-25 13:50:23
Message-ID: 877iqxko68.fsf@oxford.xeocode.com
Lists: pgsql-general
"Thomas Munz" <thomas(at)ecommerce(dot)com> writes:
> 100,000 entries less than the second one)
> is much slower than selecting all entries. This query was also executed
> 100 times, always with the same result.
>
> explain ANALYZE select * from hd_conversation where action_int is null limit 1552888;
> explain ANALYZE select * from hd_conversation where action_int is null;
What are the results if you run the query without EXPLAIN ANALYZE? Use \timing to
get the timing results, and wrap the query in count(*) so you aren't timing
network transfer (note the subquery needs an alias in PostgreSQL):
SELECT count(*) FROM (SELECT * FROM hd_conversation WHERE action_int IS NULL LIMIT 100000) AS sub;
I think your timing results are dominated by the overhead of measuring the time
spent in each plan node. With the extra Limit node, EXPLAIN ANALYZE has to call
gettimeofday nearly twice as often. That works out to about 2.5us per
gettimeofday call, which seems about right.
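A back-of-envelope sketch of that estimate, using the numbers from this thread (the ~1.55M row count from the LIMIT and the ~2.5us clock-read cost mentioned above); the two-calls-per-row-per-node figure is an assumption about how EXPLAIN ANALYZE instruments each node, not something stated in the email:

```python
# Rough overhead estimate for EXPLAIN ANALYZE instrumentation when a
# Limit node is stacked on top of a scan. Numbers are illustrative,
# taken from the thread; calls_per_row_per_node is an assumption
# (one timestamp on node entry, one on exit).
rows = 1_552_888                 # rows flowing through the Limit node
gettimeofday_us = 2.5            # assumed cost of one clock read, in microseconds
calls_per_row_per_node = 2       # assumed: entry + exit timestamps per row

# One extra plan node roughly doubles the number of clock reads.
extra_calls = rows * calls_per_row_per_node
extra_seconds = extra_calls * gettimeofday_us / 1_000_000
print(f"~{extra_seconds:.1f} s of extra timing overhead")  # ~7.8 s
```

Several seconds of pure measurement overhead would easily explain the LIMIT query appearing slower than the unrestricted one under EXPLAIN ANALYZE.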
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
| | From | Date | Subject |
|---|---|---|---|
| Next Message | Tom Lane | 2007-05-25 13:52:27 | Re: ERROR: cache lookup failed for type 0 |
| Previous Message | Thomas Munz | 2007-05-25 13:18:45 | Re: Wired behavor with LIMIT |