From: Slavisa Garic <Slavisa(dot)Garic(at)infotech(dot)monash(dot)edu(dot)au>
To: Kevin Brown <kevin(at)sysexperts(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: COPY from question
Date: 2004-02-05 00:42:32
Message-ID: Pine.GSO.4.10.10402051133020.8655-100000@bruce.csse.monash.edu.au
Lists: pgsql-hackers pgsql-performance
Hi Kevin,
On Tue, 3 Feb 2004, Kevin Brown wrote:
> Slavisa Garic wrote:
> > Using the pg module in Python, I am trying to run the COPY command to
> > populate a large table. I am using this to replace INSERTs, which take
> > a few hours to add 70000 entries, whereas COPY takes a minute and a half.
>
> That difference in speed seems quite large. Too large. Are you batching
> your INSERTs into transactions (you should be in order to get good
> performance)? Do you have a ton of indexes on the table? Does it have
> triggers on it or some other thing (if so then COPY may well wind up doing
> the wrong thing since the triggers won't fire for the rows it inserts)?
>
> I don't know what kind of schema you're using, but it takes perhaps a
> couple of hours to insert 2.5 million rows on my system. But the rows
> in my schema may be much smaller than yours.
You are right about the indexes. There are quite a few of them (5-6 without
looking at the schema). The problem is that I do need those indexes, as I
have a lot of SELECTs on that table and the inserts only happen once.
You are also right about the rows (I think), as I have about 15-20 columns.
This could be split into a few other tables, and it used to be, but I
merged them because of the requirement for faster SELECTs. With the
current schema, most of my modules that access the database no longer
need to do expensive JOINs as they used to. Because faster SELECTs are
more important to me than faster INSERTs, I had to do this. This wasn't a
problem for me until I started creating experiments with more than
20 thousand jobs, which translates to 20 thousand rows in this big
table.
I do batch INSERTs into one big transaction (1000 rows at a time). While I
did get some improvement compared to a single transaction per INSERT, it
was still not fast enough (well, not for me :) ). Could you please
elaborate on the triggers? I have no idea what kinds of triggers there are
in PostgreSQL or relational databases in general.
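For reference, the batching I described is roughly along these lines (a
minimal sketch, assuming the classic PyGreSQL `pg` interface; the database,
table and column names are just placeholders, and the values are assumed to
be escaped upstream):

    import pg

    conn = pg.connect(dbname='experiments')  # placeholder database name

    BATCH_SIZE = 1000

    def insert_batched(rows):
        """Insert (job_id, status) tuples in transactions of BATCH_SIZE rows."""
        for start in range(0, len(rows), BATCH_SIZE):
            conn.query("BEGIN")
            for job_id, status in rows[start:start + BATCH_SIZE]:
                # Values are assumed to be validated/escaped before this point.
                conn.query("INSERT INTO jobs (job_id, status) "
                           "VALUES (%d, '%s')" % (job_id, status))
            conn.query("COMMIT")

Grouping 1000 rows per COMMIT avoids paying the transaction overhead on
every single row, but each row is still a separate INSERT statement, which
is why it stays well behind COPY.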
With regards to my problem, I did solve it by piping the data into
COPY ... FROM STDIN. Now I have about 75000 rows inserted in 40 seconds,
which is extremely good for me.
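In case it helps anyone else, the COPY path looks roughly like this (again a
sketch, assuming PyGreSQL's putline()/endcopy() methods; "jobs" and its
columns are placeholders, and the row tuples are assumed to match the
table's column order):

    import pg

    conn = pg.connect(dbname='experiments')  # placeholder database name

    def copy_rows(rows):
        """Stream (job_id, status) tuples through COPY instead of per-row INSERTs."""
        conn.query("COPY jobs FROM STDIN")
        for job_id, status in rows:
            # COPY text format: tab-separated columns, one row per line.
            conn.putline("%d\t%s\n" % (job_id, status))
        conn.putline("\\.\n")  # end-of-data marker
        conn.endcopy()

The whole load goes through the server-side COPY path in one stream of
putline() calls, which is where the speedup over row-by-row INSERT comes
from.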
Thank you for your help,
Regards,
Slavisa
> --
> Kevin Brown kevin(at)sysexperts(dot)com
>
> ---------------------------(end of broadcast)---------------------------
> TIP 4: Don't 'kill -9' the postmaster
>