From: Mark Kirkwood <markir(at)slingshot(dot)co(dot)nz>
To: David Link <dlink(at)soundscan(dot)com>
Cc: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Performance problem with 50,000,000 rows
Date: 2001-10-11 04:57:59
Message-ID: 01101117575900.01048@spikey.slithery.org
Lists: pgsql-general
Previously:
>
>Table sizes .....
>
>bk_inv : 46,790,877
>bk_title: 2,311,710
>
>Query :
>
>select i.isbn,
> t.vendor,
> i.store,
> i.qty
>from bk_inv i,
> bk_title t
>where i.isbn = t.isbn
>and t.vendor = '01672708' ;
>
It might be worth putting an index on bk_title.vendor; then the scan of this
table will quickly find the appropriate rows (assuming vendor is selective).
The join to bk_inv can then use the bk_inv.isbn index as before, but
hopefully over a much smaller dataset, so the planner might be able to use a
nested loop from bk_title -> bk_inv instead of a huge merge join.
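As a sketch (the index name is just illustrative), creating the index and
re-checking the plan might look like:

```sql
-- Hypothetical index on the selective filter column
CREATE INDEX bk_title_vendor_idx ON bk_title (vendor);

-- Refresh planner statistics so the new index gets considered
VACUUM ANALYZE bk_title;

-- Ideally the plan now shows an index scan on bk_title_vendor_idx
-- feeding a nested loop that probes bk_inv via its isbn index
EXPLAIN
SELECT i.isbn, t.vendor, i.store, i.qty
FROM bk_inv i, bk_title t
WHERE i.isbn = t.isbn
  AND t.vendor = '01672708';
```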
It might also be worth playing with sort_mem and shared_buffers (if you have
not done so already), since you have a reasonable amount of memory.
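For example (the values here are only illustrative; tune them to the RAM you
actually have spare):

```sql
-- sort_mem is per-sort, in kilobytes, and can be set per session:
SET sort_mem = 16384;    -- 16 MB for sorts and merge joins

-- shared_buffers is set in postgresql.conf (restart required),
-- counted in 8 KB pages, e.g.:
--   shared_buffers = 8192    # 8192 x 8 KB = 64 MB
```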
Good luck,

Mark