From: Rodrigo Gonzalez <rjgonzale(at)gmail(dot)com>
To: "Joshua D(dot) Drake" <jd(at)commandprompt(dot)com>
Cc: James Neff <jneff(at)tethyshealth(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: slow speeds after 2 million rows inserted
Date: 2006-12-29 18:30:21
Message-ID: 45955EBD.10601@gmail.com
Lists: pgsql-general
Joshua D. Drake wrote:
> On Fri, 2006-12-29 at 13:21 -0500, James Neff wrote:
>> Joshua D. Drake wrote:
>>> Also as you are running 8.2 you can use multi valued inserts...
>>>
>>> INSERT INTO data_archive VALUES (...), (...), (...);
>>>
>> Would this speed things up? Or is that just another way to do it?
>
> The fastest way will be copy.
> The second fastest will be multi-value inserts in batches, e.g.:
>
> INSERT INTO data_archive VALUES (...), (...), (...); (I don't know what the max is)
>
> but commit every 1000 inserts or so.
>
> Sincerely,
>
> Joshua D. Drake
>
>
>> Thanks,
>> James
>>
>>
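To make that concrete, here is a minimal sketch of the batched multi-value approach (the column list and values are only placeholders; use the real data_archive columns):

    BEGIN;
    INSERT INTO data_archive (id, payload) VALUES
        (1, 'first row'),
        (2, 'second row'),
        (3, 'third row');
    -- keep each INSERT (or each transaction) to roughly 1000 rows
    COMMIT;

Wrapping each batch in an explicit transaction avoids a commit (and fsync) per row, which is where most of the time goes.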
Another thing: if you have to do validation and so on, writing the validated data to a temp file and loading it with COPY will still be faster. That way you can validate and use COPY at the same time.
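A minimal sketch of that workflow, assuming the validated rows were written to a hypothetical server-readable file /tmp/data_archive_validated.csv (column list is a placeholder; match it to the real table):

    COPY data_archive (id, payload)
        FROM '/tmp/data_archive_validated.csv' WITH CSV;

COPY loads the whole file in a single command, so it skips per-statement parsing and planning; if the client cannot write a file the server can read, COPY ... FROM STDIN (or psql's \copy) works the same way from the client side.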