From: | "Merlin Moncure" <mmoncure(at)gmail(dot)com> |
---|---|
To: | "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
Cc: | "Postgresql Performance" <pgsql-performance(at)postgresql(dot)org> |
Subject: | Re: is it possible to make this faster? |
Date: | 2006-05-26 16:56:44 |
Message-ID: | b42b73150605260956h30a960efl39d75601d5767747@mail.gmail.com |
Lists: pgsql-performance
On 5/26/06, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Well, this bears looking into, because I couldn't get anywhere near 20ms
> with mysql. I was using a dual Xeon 2.8GHz machine which ought to be [...]
did you have a key on (a,b,c)? if I include an unimportant unkeyed field d,
the query time jumps from 70ms to ~1 second. the mysql planner is tricky;
it's full of special-case optimizations...
select count(*) from (select a, b, max(c) from t group by a, b) q;

blows the high-performance case, as does putting the query in a view.
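(for concreteness, a minimal sketch of the shape I mean -- the table and key
below are illustrative, not the actual schema:)

create table usage_samples (
    user_id     integer not null,
    acc_id      integer not null,
    sample_date date not null,
    d           integer,  -- the unimportant, unkeyed field
    primary key (user_id, acc_id, sample_date)
);

-- fast shape: mysql can answer this with a loose index scan over the
-- composite key alone (EXPLAIN shows "Using index for group-by")
select user_id, acc_id, max(sample_date)
  from usage_samples
 group by user_id, acc_id;

-- slow shape: touching the unkeyed field d forces it to visit the rows
select user_id, acc_id, max(sample_date), d
  from usage_samples
 group by user_id, acc_id;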
mysql> select version();
+-----------+
| version() |
+-----------+
| 5.0.16 |
+-----------+
1 row in set (0.00 sec)
mysql> set global query_cache_size = 0;
Query OK, 0 rows affected (0.00 sec)
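(to confirm the cache really is off before timing anything -- standard mysql
syntax, should report 0:)

mysql> show variables like 'query_cache_size';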
mysql> select user_id, acc_id, max(sample_date) from usage_samples group by 1,2;
[...]
+---------+--------+------------------+
939 rows in set (0.07 sec)
mysql> select user_id, acc_id, max(sample_date), d from usage_samples group by 1,2;
[...]
+---------+--------+------------------+--------------+
939 rows in set (1.39 sec)
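(the 20x gap should show up in explain as well -- I'd expect the fast shape
to report "Using index for group-by" in the Extra column, and the slow one
to fall back to something like "Using temporary; Using filesort":)

mysql> explain select user_id, acc_id, max(sample_date)
    -> from usage_samples group by 1,2;
mysql> explain select user_id, acc_id, max(sample_date), d
    -> from usage_samples group by 1,2;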
merlin