From: "Morgan" <mkita(at)verseon(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Insert slow down on empty database
Date: 2005-06-03 23:13:21
Message-ID: d7qo6e$fq$1@news.hub.org
Lists: pgsql-performance
Hi,
I am having a problem with inserting a large amount of data with my libpqxx
program into an initially empty database. It appears to be the EXACT same
problem discussed here:
http://archives.postgresql.org/pgsql-bugs/2005-03/msg00183.php
In fact my situation is nearly identical: roughly 5 major tables with
foreign keys between each other, all being loaded simultaneously with
about 2-3 million rows each. It seems the problem is caused by my use of
prepared statements, which leads the query planner to choose sequential
scans for the foreign key checks because the tables are initially empty.
As with the post above, if I drop my connection after about 4000 inserts
and re-establish it, the inserts speed up by a couple of orders of
magnitude and remain relatively constant through the rest of the
insertion.
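The workaround described above amounts to dropping and re-opening the
session every few thousand rows, so the server plans the prepared
foreign-key checks afresh against the now-populated tables. A minimal
sketch of that loop, with the database interaction abstracted behind
hypothetical connect/insert callables (the poster's actual code uses
libpqxx in C++; the 4000-row threshold is just the figure observed in
the post):

```python
RECONNECT_EVERY = 4000  # roughly the threshold observed in the post


def bulk_insert(rows, connect, insert):
    """Insert rows, re-establishing the connection every RECONNECT_EVERY rows.

    connect() -> a connection object with a close() method (hypothetical)
    insert(conn, row) -> inserts a single row over conn (hypothetical)
    Returns the number of connections opened, for illustration.
    """
    conn = connect()
    opened = 1
    for i, row in enumerate(rows):
        if i > 0 and i % RECONNECT_EVERY == 0:
            conn.close()      # discard the session and its stale seq-scan plans
            conn = connect()  # a fresh session re-plans the prepared statements
            opened += 1
        insert(conn, row)
    conn.close()
    return opened
```

The point is only the loop structure: a new session forces the prepared
statements (and the implicit FK-check plans) to be planned again once the
referenced tables are no longer empty.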
At first I was using straight insert statements, and although they were a
bit slower than the prepared statements (after the re-established
connection), they never ran into this problem with the database being
initially empty. I only changed to prepared statements because the
documentation's advice on bulk data loads suggested it =).
I can work around this problem, and I am sure somebody is working on fixing
this, but I thought it might be good to reaffirm the problem.
Thanks,
Morgan Kita