From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Thomas F(dot)O'Connell" <tfo(at)sitening(dot)com>
Cc: PgSQL - Performance <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Performance Anomalies in 7.4.5
Date: 2004-10-21 21:53:19
Message-ID: 25228.1098395599@sss.pgh.pa.us
Lists: pgsql-performance

"Thomas F.O'Connell" <tfo(at)sitening(dot)com> writes:
> -> Nested Loop (cost=0.00..0.01 rows=1 width=8) (actual
> time=1.771..298305.531 rows=2452 loops=1)
> Join Filter: ("inner".id = "outer".id)
> -> Seq Scan on userdata u (cost=0.00..0.00 rows=1 width=8)
> (actual time=0.026..11.869 rows=2452 loops=1)
> -> Seq Scan on userdata_history h (cost=0.00..0.00 rows=1
> width=8) (actual time=0.005..70.519 rows=41631 loops=2452)
> Filter: (id = 18181::bigint)
> Total runtime: 298321.926 ms
> (7 rows)
What's killing you here is that the planner thinks these tables are
completely empty (notice the zero cost estimates, which implies the
table has zero blocks --- the fact that the rows estimate is 1 and not 0
is the result of sanity-check clamping inside costsize.c). This leads
it to choose a nestloop, which would be the best plan if there were only
a few rows involved, but it degenerates rapidly when there are not.
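
You can check what the planner was working from by looking at the stats
directly; for instance (using the table names from the plan above, and
assuming they're not schema-qualified):

    -- pages/tuples recorded by the last ANALYZE; zeroes here mean the
    -- stats were taken while the tables were empty
    SELECT relname, relpages, reltuples
    FROM pg_class
    WHERE relname IN ('userdata', 'userdata_history');
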
It's easy to fall into this trap when truncating and reloading tables;
all you need is an "analyze" while the table is empty. The rule of
thumb is to analyze just after you reload the table, not just before.
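
Concretely, something along these lines (just a sketch of a
truncate-and-reload cycle for the userdata table; the COPY file path is
made up, substitute whatever your load process actually does):

    TRUNCATE userdata;
    COPY userdata FROM '/path/to/userdata.copy';
    ANALYZE userdata;    -- analyze *after* the reload, so the stats
                         -- describe the repopulated table, not an empty one
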
I'm getting more and more convinced that we need to drop the reltuples
and relpages entries in pg_class, in favor of checking the physical
table size whenever we make a plan. We could derive the tuple count
estimate by having ANALYZE store a tuples-per-page estimate in pg_class
and then multiply by the current table size; tuples-per-page should be
a much more stable figure than total tuple count.
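
To put rough numbers on that idea using the existing pg_class columns
(this is just the back-of-the-envelope version; the planner itself would
take the current block count from the physical file size at plan time,
not from relpages):

    -- tuples-per-page density as of the last ANALYZE; the plan-time row
    -- estimate would be this density times the table's current size in blocks
    SELECT relname,
           reltuples / NULLIF(relpages, 0) AS tuples_per_page
    FROM pg_class
    WHERE relname = 'userdata_history';
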
One drawback to this is that it would require an additional lseek per
table while planning, but that doesn't seem like a huge penalty.
Probably the most severe objection to doing things this way is that the
selected plan could change unexpectedly as a result of the physical
table size changing. Right now the DBA can keep tight rein on actions
that might affect plan selection (ie, VACUUM and ANALYZE), but that
would go by the board with this. OTOH, we seem to be moving towards
autovacuum, which also takes away any guarantees in this department.
In any case this is speculation for 8.1; I think it's too late for 8.0.
regards, tom lane