From: Richard Huxton <dev(at)archonet(dot)com>
To: Josué Maldonado <josue(at)lamundial(dot)hn>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: update slow
Date: 2003-11-11 21:35:28
Message-ID: 200311112135.28131.dev@archonet.com
Lists: pgsql-general
On Tuesday 11 November 2003 20:55, Josué Maldonado wrote:
> Hi Richard,
>
> Here are both ways:
Excellent - OK, here's a crash course in understanding the output of EXPLAIN
ANALYSE...
> dbmund=# EXPLAIN ANALYSE update estprod set
> dbmund-# epr_costo=(select tcos from cost2est2003 where code=pro_code
> and mes=epr_periodo)
> dbmund-# where epr_periodo='200311';
>
> Seq Scan on estprod (cost=0.00..9177.91 rows=8080 width=163) (actual
> time=440.99..627139.23 rows=6867 loops=1)
PG predicted 8080 rows to update (each 163 bytes wide). It actually updated 6867
rows, with a total time of 627139.23 milliseconds (over ten minutes).
> Filter: (epr_periodo = '200311'::bpchar)
> SubPlan
> -> Index Scan using c2emes on cost2est2003 (cost=0.00..1532.53
> rows=2 width=16) (actual time=66.60..91.19 rows=1 loops=6867)
Each call of the sub-select took up to 91.19 ms, and looped over 6867 calls that
comes to roughly 626 seconds - almost all of your query time.
> Index Cond: (mes = $1)
> Filter: (code = $0)
> Total runtime: 628557.56 msec
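You can check that arithmetic directly in psql - it is just the per-loop time from the plan multiplied by the loop count:

```sql
-- 6867 loops at up to 91.19 ms each accounts for nearly the whole runtime.
SELECT 91.19 * 6867 / 1000.0 AS subplan_seconds;  -- about 626 seconds
```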
The only thing that might speed it up is to build an index on (mes,code) for
cost2est2003, and even then I wouldn't expect wonders.
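A sketch of that index, using the column names from your query (the index name is just an example - check it against your schema):

```sql
-- Composite index matching the subplan's "Index Cond" (mes) and "Filter" (code):
CREATE INDEX c2e_mes_code ON cost2est2003 (mes, code);
ANALYZE cost2est2003;  -- refresh planner statistics afterwards
```

With both columns in one index, the subplan can find its matching row without the extra filter step on code.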
> dbmund=# explain analyze update estprod set
> dbmund-# epr_costo= tcos
> dbmund-# from cost2est2003
> dbmund-# where code=pro_code and mes=epr_periodo
> dbmund-# and epr_periodo='200311';
>
> Merge Join (cost=10080.76..15930.98 rows=316 width=197) (actual
> time=1191.89..4704.49 rows=6851 loops=1)
> Merge Cond: ("outer".code = "inner".pro_code)
> Join Filter: ("outer".mes = "inner".epr_periodo)
> -> Index Scan using c2ecode on cost2est2003 (cost=0.00..4614.85
> rows=99350 width=34) (actual time=0.16..2974.96 rows=99350 loops=1)
> -> Sort (cost=10080.76..10100.96 rows=8080 width=163) (actual
> time=1191.62..1235.32 rows=55216 loops=1)
> Sort Key: estprod.pro_code
> -> Seq Scan on estprod (cost=0.00..9177.91 rows=8080
> width=163) (actual time=396.88..1126.28 rows=6867 loops=1)
> Filter: (epr_periodo = '200311'::bpchar)
> Total runtime: 5990.34 msec
> (9 rows)
>
> Wow, update from is pretty faster than the first update, can't
> understand why.
In the second example, PG gathers the target rows from estprod, sorts them, and
then joins them against cost2est2003 using your index on c2ecode. This is clearly
a better plan. Note that since the rows from the previous test could still be
cached in memory you'd expect some improvement anyway, but this plan simply looks
better.
PG obviously can't figure out that it can convert the first form into the
second, but I'm guessing there are good reasons for that. Although your example
is simple, there could well be cases where such a rewrite isn't safe. Presumably
that is one reason the non-standard UPDATE ... FROM syntax is there.
Try the second form with your real query, VACUUM ANALYSE both tables, and
perhaps try an index on (mes, code) to see what that does for you. If you run
into any more performance issues, there is a mailing list specifically for them -
plenty of knowledgeable types there.
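For reference, the maintenance step would look like this (table names taken from your query):

```sql
-- Reclaim dead row space and refresh the planner's statistics on both tables:
VACUUM ANALYSE estprod;
VACUUM ANALYSE cost2est2003;
```

Fresh statistics matter here because the planner's row estimates (8080 predicted vs 6867 actual above) feed directly into its choice of plan.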
--
Richard Huxton
Archonet Ltd