From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "David Witham" <davidw(at)unidial(dot)com(dot)au>
Cc: pgsql-sql(at)postgresql(dot)org
Subject: Re: Indexes and statistics
Date: 2004-02-18 05:10:05
Message-ID: 20690.1077081005@sss.pgh.pa.us
Lists: pgsql-sql
"David Witham" <davidw(at)unidial(dot)com(dot)au> writes:
> One of the customers is quite large (8.3% of the records):
Hmm. Unless your rows are quite wide, a random sampling of 8.3% of the
table would be expected to visit every page of the table, probably
several times. So the planner's cost estimates do not seem out of line
to me; an indexscan *should* be slow. The first question to ask is why
the estimate deviates from reality. Are the rows for that customer ID likely to
be physically concentrated into a limited number of physical pages?
Do you have so much RAM that the whole table got swapped in, eliminating
the extra I/O that the planner is expecting?
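The "visits every page" claim can be sketched numerically: if the matching rows are scattered uniformly, a page escapes being touched only if none of its rows match, so the expected number of distinct pages an indexscan reads is roughly total_pages * (1 - (1 - f)^rows_per_page), where f is the selected fraction. A minimal sketch (the table geometry below is hypothetical, not from this thread):

```python
def expected_pages_touched(total_pages: int, rows_per_page: int, fraction: float) -> float:
    """Expected number of distinct heap pages an indexscan must read,
    assuming the selected rows are scattered uniformly over the table."""
    # Probability that a given page contains none of the selected rows:
    p_untouched = (1.0 - fraction) ** rows_per_page
    return total_pages * (1.0 - p_untouched)

# Hypothetical geometry: 10,000 pages, 50 rows/page, 8.3% of rows selected.
pages = expected_pages_touched(10_000, 50, 0.083)
```

With these assumed numbers the scan is expected to hit about 99% of the table's pages, which is why the planner prices such an indexscan close to (or worse than) a sequential scan.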
regards, tom lane
| | From | Date | Subject |
|---|---|---|---|
| Next Message | Kumar | 2004-02-18 05:13:59 | Disabling constraints |
| Previous Message | David Witham | 2004-02-18 04:43:07 | Indexes and statistics |