From: | "Gary Doades" <gpd(at)gpdnet(dot)co(dot)uk> |
---|---|
To: | pgsql-performance(at)postgresql(dot)org |
Subject: | Re: planner/optimizer question |
Date: | 2004-04-29 21:15:12 |
Message-ID: | 40917E70.21362.14F4A330@localhost |
Lists: pgsql-performance
On 29 Apr 2004 at 13:54, Josh Berkus wrote:
> Gary,
>
>
> It's also quite possible that MSSQL simply has a more efficient index scanning
> implementation than we do. They've certainly had incentive; their storage
> system sucks big time for random lookups and they need those fast indexes.
> (just try to build a 1GB adjacency list tree on SQL Server. I dare ya).
>
> Certainly the fact that MSSQL is essentially a single-user database makes
> things easier for them. They don't have to maintain multiple copies of the
> index tuples in memory. I think that may be our main performance loss.
>
Possibly, but MSSQL certainly uses the data held in the index and cuts out the
subsequent (possibly random-seek) fetch of the data row. This is also why the
"Index Tuning Wizard" often recommends multi-column compound indexes.
I've tried these recommendations on occasion and they certainly speed up the
selects significantly. If anything, the scan over the new compound index must be
slower than over the original single-column index, and yet it still gets the data
faster.

This indicates to me that it is not the scan (or I/O) performance that is
making the difference, but not having to go and fetch the data row.
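To illustrate what I mean (table and column names are invented for the example,
not from any real schema), a compound index that "covers" the query lets the
select be answered from the index alone, with no visit to the data row:

CREATE TABLE orders (
    order_id    integer PRIMARY KEY,
    customer_id integer,
    order_date  date,
    amount      numeric(10,2)
);

-- single-column index: the engine still has to fetch each matching row
-- to get order_date and amount
CREATE INDEX idx_orders_customer ON orders (customer_id);

-- compound ("covering") index: every column the query needs is in the
-- index, so the data rows are never touched
CREATE INDEX idx_orders_customer_date_amount
    ON orders (customer_id, order_date, amount);

SELECT order_date, amount
FROM   orders
WHERE  customer_id = 42;

The compound index is wider, so scanning it can't be cheaper per entry than
scanning the single-column one, yet the query comes back faster because the
row fetches disappear.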
Cheers,
Gary.