From: Oliver Crosby <ryusei(at)gmail(dot)com>
To: Dawid Kuroczko <qnex42(at)gmail(dot)com>
Cc: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, jd(at)commandprompt(dot)com, pgsql-performance(at)postgresql(dot)org
Subject: Re: Looking for tips
Date: 2005-07-19 20:28:26
Message-ID: 1efd553a050719132836c31b78@mail.gmail.com
Lists: pgsql-performance
> If it is possible, try:
> 1) wrapping many inserts into one transaction
> (BEGIN;INSERT;INSERT;...INSERT;COMMIT;). As PostgreSQL will need to
> handle fewer transactions per second (each of your inserts is a
> transaction on its own), it may work faster.
Aye, that's what I have it doing right now. Wrapping the inserts in a
transaction saves a HUGE chunk of time (cuts it down by about 40%).
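The batching pattern above can be sketched as follows. This is a hypothetical illustration, not code from the thread: sqlite3 stands in for PostgreSQL so the sketch is self-contained, but the idea is identical with a PostgreSQL driver (issue BEGIN once, run all the INSERTs, then COMMIT once, so the server pays for a single transaction flush instead of one per statement).

```python
import sqlite3

# Illustrative only: an in-memory SQLite database stands in for PostgreSQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")

rows = [("item%d" % i,) for i in range(1000)]

# One transaction around all inserts: the commit cost is paid once
# at COMMIT instead of once per INSERT statement.
with conn:  # emits BEGIN ... COMMIT around the block
    conn.executemany("INSERT INTO items (name) VALUES (?)", rows)

count = conn.execute("SELECT count(*) FROM items").fetchone()[0]
print(count)  # 1000
```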
> 2) If you can do 1, you could go further and use a COPY command which is
> the fastest way to bulk-load a database.
I don't think I can use COPY in my case because I need to do
processing on a per-line basis, and I need to check whether the item I
want to insert is already there; if it is, I need to get its ID so I
can use that for further processing.
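The check-then-insert step described above can be sketched like this. Again a hypothetical illustration with stdlib sqlite3 standing in for PostgreSQL; the helper name `get_or_insert` is mine, not from the thread. With PostgreSQL you would SELECT first and, on a miss, INSERT and then fetch the new ID (e.g. via currval() on the ID sequence).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT UNIQUE)")

def get_or_insert(conn, name):
    # Hypothetical helper: return the existing row's ID, or insert the
    # row and return the newly assigned ID.
    row = conn.execute("SELECT id FROM items WHERE name = ?",
                       (name,)).fetchone()
    if row is not None:
        return row[0]
    cur = conn.execute("INSERT INTO items (name) VALUES (?)", (name,))
    return cur.lastrowid

a = get_or_insert(conn, "widget")   # inserts, returns new ID
b = get_or_insert(conn, "widget")   # second call hits the SELECT
print(a == b)  # True
```

Note this per-row round trip is exactly what rules out a plain COPY: the ID of an existing row has to come back to the client before the next line can be processed.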