From: PFC <lists(at)boutiquenumerique(dot)com>
To: "Richard van den Berg" <richard(dot)vandenberg(at)trust-factory(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Foreign key slows down copy/insert
Date: 2005-04-15 10:22:52
Message-ID: op.so9q4exeth1vuj@localhost
Lists: pgsql-performance
> Index Scan using ix_B on B  (cost=0.04..3.06 rows=1 width=329) (actual time=93.824..93.826 rows=1 loops=1)
>   Index Cond: (id = $0)
>   InitPlan
>     ->  Limit  (cost=0.00..0.04 rows=1 width=4) (actual time=15.128..15.129 rows=1 loops=1)
>           ->  Seq Scan on A  (cost=0.00..47569.70 rows=1135570 width=4) (actual time=15.121..15.121 rows=1 loops=1)
> Total runtime: 94.109 ms
94 ms for an index scan?
That looks really slow...
Was the index in the RAM cache? Does it fit? Is it faster the second
time? If it's still that slow, something somewhere is severely screwed.
B has 150K rows you say, so everything about B should fit in RAM, and you
should get around 0.2 ms for an index scan, not 90 ms!
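
A quick way to check the cache question is to run the exact same lookup
twice in one psql session and compare the actual times. This is only a
sketch: B and id are taken from your plan, and the constant stands in for
whatever produced the $0 in it:

  EXPLAIN ANALYZE SELECT * FROM B WHERE id = 12345;
  -- run it again immediately: if the second run is well under 1 ms,
  -- the 90 ms was disk I/O and the index simply wasn't cached yet
  EXPLAIN ANALYZE SELECT * FROM B WHERE id = 12345;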
Try this:
Locate the files on disk which make up table B (table + indexes) by
looking at the system catalogs (a sketch follows this list).
Look at the size of those files. Is the index severely bloated? REINDEX?
DROP and recreate the index?
Load them into the RAM cache (just run "cat file | wc -c" on them several
times until it's almost instantaneous).
Retry your query and your COPY.
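
A rough sketch of the catalog lookup and the rebuild, assuming the table
is literally called b (adjust the name to yours; relpages is counted in
8 kB blocks):

  -- which files belong to b and its indexes, and how many pages they hold;
  -- the data files themselves live under $PGDATA/base/<db oid>/<relfilenode>
  SELECT c.relname, c.relkind, c.relfilenode, c.relpages
  FROM pg_class c
  WHERE c.oid = 'b'::regclass
     OR c.oid IN (SELECT indexrelid FROM pg_index WHERE indrelid = 'b'::regclass);

  -- if the index is far bigger than its data justifies, rebuild it
  REINDEX TABLE b;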
I know it's stupid... but it's a lot faster to load an index into the
cache by reading its file sequentially than by accessing it randomly.
(Even though, with this number of rows, it should not be THAT slow!)