From: Sven Willenberger <sven(at)dmv(dot)com>
To: Oliver Crosby <ryusei(at)gmail(dot)com>
Cc: Dawid Kuroczko <qnex42(at)gmail(dot)com>, Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, jd(at)commandprompt(dot)com, pgsql-performance(at)postgresql(dot)org
Subject: Re: Looking for tips
Date: 2005-07-19 20:46:01
Message-ID: 1121805961.3674.25.camel@lanshark.dmv.com
Lists: pgsql-performance
On Tue, 2005-07-19 at 16:28 -0400, Oliver Crosby wrote:
> > If it is possible try:
> > 1) wrapping many inserts into one transaction
> > (BEGIN;INSERT;INSERT;...INSERT;COMMIT;). As PostgreSQL will need to
> > handle fewer transactions per second (each of your inserts is a
> > transaction), it may work faster.
>
> Aye, that's what I have it doing right now. The transactions do save a
> HUGE chunk of time. (Cuts it down by about 40%).
>
> > 2) If you can do 1, you could go further and use a COPY command which is
> > the fastest way to bulk-load a database.
>
> I don't think I can use COPY in my case because I need to do
> processing on a per-line basis, and I need to check if the item I want
> to insert is already there, and if it is, I need to get its ID so I
> can use that for further processing.
>
Since triggers work with COPY, you could probably write a trigger that
looks for this condition and does the ID processing you need; you could
thereby enjoy the enormous speed gain resulting from COPY and maintain
your data continuity.
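
As a rough, untested sketch (I'm assuming an "items" table with a serial
id and a name column, plus a scratch table "item_matches" to collect the
ids of rows that are already present; substitute whatever your real
schema looks like), a BEFORE INSERT trigger could skip duplicates during
the COPY and squirrel away the existing ids for later processing:

    -- scratch table to collect ids of rows that already existed
    CREATE TABLE item_matches (existing_id integer, name text);

    CREATE OR REPLACE FUNCTION items_dedup() RETURNS trigger AS $$
    DECLARE
        existing integer;
    BEGIN
        SELECT id INTO existing FROM items WHERE name = NEW.name;
        IF FOUND THEN
            -- already present: remember the existing id, suppress this row
            INSERT INTO item_matches (existing_id, name)
                VALUES (existing, NEW.name);
            RETURN NULL;
        END IF;
        RETURN NEW;   -- new item, let COPY insert it as usual
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER items_dedup_trg
        BEFORE INSERT ON items
        FOR EACH ROW EXECUTE PROCEDURE items_dedup();

    -- the bulk load itself; the trigger fires once per row
    COPY items (name) FROM '/tmp/items.txt';

After the COPY finishes you can pull the collected ids back out of
item_matches and do the per-item processing there, instead of checking
row-by-row from the client.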
Sven