From: | Stephan Szabo <sszabo(at)megazone23(dot)bigpanda(dot)com> |
---|---|
To: | Roman Fail <rfail(at)posportal(dot)com> |
Cc: | <josh(at)agliodbs(dot)com>, <pgsql-performance(at)postgresql(dot)org> |
Subject: | Re: 7.3.1 New install, large queries are slow |
Date: | 2003-01-16 18:43:02 |
Message-ID: | 20030116103622.K6828-100000@megazone23.bigpanda.com |
Lists: | pgsql-performance |
On Wed, 15 Jan 2003, Roman Fail wrote:
I just had a couple of new thoughts.

If you make an index on batchdetail(batchid), does that help?
I realized that it was doing a merge join to join d and the (t,b,m)
combination when it was expecting 3 rows out of the latter, and batchid
is presumably fairly selective on the batchdetail table, right? I'd have
expected a nested loop over the id column, but it doesn't appear you
have an index on it in batchdetail.
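Something along these lines, say (just a sketch, untested; the index
name is arbitrary, and the ANALYZE is only there to refresh the
statistics on a table that size):

  CREATE INDEX batchdetail_batchid_idx ON batchdetail (batchid);
  ANALYZE batchdetail;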
Then I realized that batchheader.batchid and batchdetail.batchid don't
even have the same type, and that's probably something else you'd need
to fix.
> batchheader has 2.6 million records:
> CREATE TABLE public.batchheader (
> batchid int8 DEFAULT nextval('"batchheader_batchid_key"'::text) NOT NULL,
> And here's batchdetail too, just for kicks. 23 million records.
> CREATE TABLE public.batchdetail (
> batchid int4,
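To line up those two batchid types (quoted above: int8 in batchheader,
int4 in batchdetail), one rough approach on 7.3, where as far as I know
you can't change a column's type in place, would be to copy the data
into a new int8 column and swap the names. Just a sketch, untested, and
the column/index names are placeholders:

  BEGIN;
  -- add an int8 column, copy the values over, then swap the names
  ALTER TABLE batchdetail ADD COLUMN batchid_new int8;
  UPDATE batchdetail SET batchid_new = batchid;
  ALTER TABLE batchdetail DROP COLUMN batchid;
  ALTER TABLE batchdetail RENAME COLUMN batchid_new TO batchid;
  COMMIT;
  -- any index on the old column goes away with it, so recreate it
  CREATE INDEX batchdetail_batchid_idx ON batchdetail (batchid);

Expect that UPDATE to take a while over 23 million rows.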