From: Tom Lane <tgl@sss.pgh.pa.us>
To: Cath Lawrence <Cath.Lawrence@anu.edu.au>
Cc: pgsql-novice@postgresql.org, pygresql@Vex.Net
Subject: Re: Loading speed
Date: 2003-10-22 14:35:13
Message-ID: 11333.1066833313@sss.pgh.pa.us
Lists: pgsql-novice

Cath Lawrence <Cath.Lawrence@anu.edu.au> writes:
> The atom table is a bit like this with some language changes:
> CREATE TABLE atom (
>     id             INTEGER PRIMARY KEY DEFAULT nextval('atom_id_seq'),
>     protein_id     CHAR(4) REFERENCES protein,
>     chain_id       CHAR(1) NOT NULL,
>     amino_acid_id  INTEGER REFERENCES amino_acid,
>     atom_type      TEXT,
>     element        TEXT,
>     x              FLOAT,
>     y              FLOAT,
>     z              FLOAT
> );
> With my simple-minded pygresql script, I have one insert statement per
> atom, and of course it crawls very feebly. Perhaps there's an
> insert_many method I haven't found yet (the docs suck, I will keep
> looking).
> Or perhaps I should write them out to a text file and use COPY?
COPY would be better than individual INSERTs, if you can bunch a
reasonably large number of rows into each COPY (the point being to cut
down per-statement overhead, of course). You could perhaps avoid the
use of an intermediate file, if pygresql supports COPY FROM STDIN
(I don't know if it does).
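For instance, if pygresql does expose the libpq copy support, an untested
sketch might look about like this (putline/endcopy, the connection
parameters, and the "atoms" list are all guesses at your setup):

    import pg

    # Untested sketch: assumes the classic "pg" module exposes libpq's
    # putline()/endcopy() copy support (check your version's docs).
    db = pg.connect(dbname='yourdb')        # hypothetical connection

    db.query("COPY atom (protein_id, chain_id, amino_acid_id, "
             "atom_type, element, x, y, z) FROM STDIN")
    for row in atoms:                       # "atoms": your parsed tuples
        db.putline("\t".join(map(str, row)) + "\n")
    db.putline("\\.\n")                     # end-of-data marker
    db.endcopy()

Leaving "id" out of the column list lets the sequence default fill it in,
and a few thousand rows per COPY is plenty to amortize the per-statement
overhead.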
However, I suspect the real problem is with poor performance of the
REFERENCES checks. What is likely happening is that the planner thinks
the "protein" and "amino_acid" tables are small, and is generating plans
for the checks that only work well for small tables. Once you get a
respectable amount of data into the referenced tables, the checks take
forever.
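You can check what plan the checks are getting by EXPLAINing the sort of
lookup the FK trigger performs; assuming protein's primary key column is
named id (I'm guessing at your schema), something like:

    # Hypothetical illustration: each atom INSERT makes the FK trigger
    # run a lookup much like this ("id" is a guess at protein's key column).
    print db.query("EXPLAIN SELECT 1 FROM protein WHERE id = '1abc'")
    # "Seq Scan on protein" means every insert walks the whole table;
    # after ANALYZE you should see an Index Scan instead.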
If you can load the "protein" and "amino_acid" tables first, then the
answer is simple --- do that, then VACUUM and/or ANALYZE those tables,
then start on the "atom" table. If you want to load them in parallel
then I think your best bet is to drop and recreate all three tables
before you start loading, and do *not* run VACUUM or ANALYZE while the
tables are empty. The default planner assumptions if no VACUUM or
ANALYZE has happened should give tolerable plans.
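In outline, the first (serial) approach would be something like this,
where the load_* calls just stand for whatever your script already does
to fill each table:

    load_proteins(db)                  # hypothetical loader routines
    load_amino_acids(db)
    db.query("VACUUM ANALYZE protein")
    db.query("VACUUM ANALYZE amino_acid")
    load_atoms(db)                     # FK checks now see real statistics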
Another thing you can consider is not doing the REFERENCES checks while
loading data at all; that is, create the tables without REFERENCES,
load all the data, ANALYZE, then install the references clauses with
ALTER TABLE ADD FOREIGN KEY. However, this is only really a good idea
if you're pretty certain the initial data is clean ---
if you have any referential check failures, tracking them down will be
painful. (7.4 would tell you the failing key value, but I don't think
7.3 does.)
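The tail end of that approach might look about like so (untested; ANALYZE
with no argument updates the stats for every table in the database):

    # Sketch: the tables were created with no REFERENCES clauses and all
    # the data has been loaded; now update stats and add the foreign keys.
    db.query("ANALYZE")
    db.query("ALTER TABLE atom ADD FOREIGN KEY (protein_id) "
             "REFERENCES protein")
    db.query("ALTER TABLE atom ADD FOREIGN KEY (amino_acid_id) "
             "REFERENCES amino_acid")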
regards, tom lane