Re: joining two tables slow due to sequential scan

From: Scott Marlowe <smarlowe(at)g2switchworks(dot)com>
To: Tim Jones <TJones(at)optio(dot)com>
Cc: Dave Dutcher <dave(at)tridecap(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: joining two tables slow due to sequential scan
Date: 2006-02-10 22:46:14
Message-ID: 1139611574.22740.154.camel@state.g2switchworks.com
Lists: pgsql-performance

On Fri, 2006-02-10 at 16:43, Tim Jones wrote:
> oops
>
> QUERY PLAN
> 'Hash Join (cost=899.83..4384.17 rows=482 width=1350) (actual
> time=0.203..0.203 rows=0 loops=1)'
> ' Hash Cond: ("outer".documentidentifier =
> "inner".dssdocumentidentifier)'
> ' -> Seq Scan on documentversions (cost=0.00..2997.68 rows=96368
> width=996) (actual time=0.007..0.007 rows=1 loops=1)'
> ' -> Hash (cost=898.62..898.62 rows=482 width=354) (actual
> time=0.161..0.161 rows=0 loops=1)'
> ' -> Bitmap Heap Scan on clinicaldocuments (cost=4.69..898.62
> rows=482 width=354) (actual time=0.159..0.159 rows=0 loops=1)'
> ' Recheck Cond: (patientidentifier = 123)'
> ' -> Bitmap Index Scan on ix_cdocpid (cost=0.00..4.69
> rows=482 width=0) (actual time=0.153..0.153 rows=0 loops=1)'
> ' Index Cond: (patientidentifier = 123)'
> 'Total runtime: 0.392 ms'
>
> note I have done these on a smaller db than what I am using but the
> plans are the same

Hmmmm. We really need to see what's happening on the real database to
see what's going wrong, e.g. if the real database estimates it'll get 30
rows and it actually gets back 5,000,000, that's a problem.

The query planner in pgsql is cost-based, so until you have real data
underneath it, and analyze it, you can't really say how it will behave
for you. In other words, small test sets don't tell you much.
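On a copy of the production data, something along these lines would show whether the estimates line up. This is only a sketch: the table and column names are taken from the plan above, the join direction is reconstructed from the "outer"/"inner" hash condition, and the patient id is just a placeholder.

```sql
-- Refresh planner statistics on the real tables first;
-- without this the row estimates in the plan are meaningless.
ANALYZE documentversions;
ANALYZE clinicaldocuments;

-- Then compare the planner's estimated rows against the actual rows
-- reported for each node in the EXPLAIN ANALYZE output.
EXPLAIN ANALYZE
SELECT *
FROM clinicaldocuments cd
JOIN documentversions dv
  ON dv.documentidentifier = cd.dssdocumentidentifier
WHERE cd.patientidentifier = 123;
```

If a node's estimated rows and actual rows differ by orders of magnitude, that's where to look.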
