From: Mark Lewis <mark(dot)lewis(at)mir3(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Merlin Moncure <mmoncure(at)gmail(dot)com>, Postgresql Performance <pgsql-performance(at)postgresql(dot)org>
Subject: Re: is it possible to make this faster?
Date: 2006-05-25 21:26:23
Message-ID: 1148592384.9750.13.camel@archimedes
Lists: pgsql-performance
On Thu, 2006-05-25 at 16:52 -0400, Tom Lane wrote:
> "Merlin Moncure" <mmoncure(at)gmail(dot)com> writes:
> > been doing a lot of pgsql/mysql performance testing lately, and there
> > is one query that mysql does much better than pgsql...and I see it a
> > lot in normal development:
>
> > select a,b,max(c) from t group by a,b;
>
> > t has an index on a,b,c.
>
> The index won't help, as per this comment from planagg.c:
>
> * We don't handle GROUP BY, because our current implementations of
> * grouping require looking at all the rows anyway, and so there's not
> * much point in optimizing MIN/MAX.
>
> Given the numbers you mention (300k rows in 2000 groups) I'm not
> convinced that an index-based implementation would help much; we'd
> still need to fetch at least one record out of every 150, which is
> going to cost near as much as seqscanning all of them.
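For what it's worth, the usual rewrite on the Postgres side is to turn
each group's MAX into its own index-backed probe; a sketch only, assuming
the same table t and its index on (a,b,c):

    -- enumerate the distinct (a,b) groups, then probe the (a,b,c)
    -- index once per group: with a and b pinned by equality, the
    -- ORDER BY c DESC ... LIMIT 1 becomes a single index fetch
    SELECT g.a, g.b,
           (SELECT c
              FROM t
             WHERE t.a = g.a AND t.b = g.b
             ORDER BY c DESC
             LIMIT 1) AS max_c
      FROM (SELECT DISTINCT a, b FROM t) g;

The outer DISTINCT still has to visit every row, which is consistent
with Tom's point; this mostly pays off when the groups are large and the
per-group aggregation is the expensive part.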
Well, if the MySQL server has enough RAM for the whole index to stay
cached (or the index plus the relevant chunks of the data file, in the
InnoDB case?), then that would explain how MySQL can use an index to get
fast results here.
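(If someone wants to sanity-check that theory on the Postgres side, the
relation sizes are easy to pull; a sketch, assuming a hypothetical index
named t_a_b_c_idx:

    -- sizes in bytes; compare against shared_buffers plus the OS
    -- cache to judge whether the index and table can stay resident
    SELECT pg_relation_size('t')           AS table_bytes,
           pg_relation_size('t_a_b_c_idx') AS index_bytes;
)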
-- Mark Lewis