Re: Optimizing large data loads

From: "John Wells" <jb(at)sourceillustrated(dot)com>
To: "Richard Huxton" <dev(at)archonet(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Optimizing large data loads
Date: 2005-08-06 13:58:07
Message-ID: 52979.172.16.3.2.1123336687.squirrel@devsea.net
Lists: pgsql-general

Richard Huxton said:
> You don't say what the limitations of Hibernate are. Usually you might
> look to:
> 1. Use COPY not INSERTs

Not an option, unfortunately.
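
For the archives, though, here's roughly what the COPY route looks like when the data can be staged as a file on the database server (a sketch only, over plain JDBC; table, column, and file names are invented, and server-side COPY needs superuser rights):

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class CopySketch {
        // Server-side COPY issued over plain JDBC. The data file must
        // live on the database server and the session must be a
        // superuser. Table, column and file names are invented.
        static void copyLoad(Connection conn) throws SQLException {
            Statement st = conn.createStatement();
            try {
                st.execute("COPY widgets (id, name) FROM '/tmp/widgets.dat'");
            } finally {
                st.close();
            }
        }
    }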

> 2. If not, block INSERTS into BEGIN/COMMIT transactions of say 100-1000

We're committing every 50 inserts at the moment... we can easily raise that, I suppose.
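
If we do, I'd expect the loop to look roughly like this (a sketch only: the Widget entity is made up, and hibernate.jdbc.batch_size in the Hibernate config would want to match BATCH_SIZE):

    import java.util.Iterator;
    import java.util.List;
    import org.hibernate.Session;
    import org.hibernate.Transaction;

    public class BatchLoadSketch {
        static final int BATCH_SIZE = 1000;   // up from our current 50

        // Commit every BATCH_SIZE rows instead of every 50, flushing and
        // clearing the session so the first-level cache doesn't grow
        // without bound. The list holds made-up mapped Widget entities.
        static void load(Session session, List widgets) {
            Transaction tx = session.beginTransaction();
            int i = 0;
            for (Iterator it = widgets.iterator(); it.hasNext();) {
                session.save(it.next());
                if (++i % BATCH_SIZE == 0) {
                    session.flush();
                    session.clear();
                    tx.commit();
                    tx = session.beginTransaction();
                }
            }
            tx.commit();   // pick up the final partial batch
        }
    }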

> 3. Turn fsync off

Done.
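
(That is, in postgresql.conf -- and only for the duration of the load, since a crash with fsync off can corrupt the database:)

    # postgresql.conf -- bulk load only; a crash while fsync is off can
    # corrupt the database, so it goes back on as soon as the load is done.
    fsync = off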

> 4. DROP/RESTORE constraints/triggers/indexes while you load your data

Hmmm... I'll have to think about this a bit. Not a bad idea, but I'm not sure how we can make it work in our situation.
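
The mechanics themselves look simple enough, something like this sketch (index and table names invented; constraints and triggers could be handled the same way):

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class IndexDropSketch {
        // Drop a non-critical index before the load and rebuild it after,
        // so each INSERT stops paying for index maintenance. Names are
        // invented; constraints/triggers would be handled the same way.
        static void loadWithoutIndex(Connection conn) throws SQLException {
            Statement st = conn.createStatement();
            try {
                st.execute("DROP INDEX widgets_name_idx");
                // ... run the bulk load here ...
                st.execute("CREATE INDEX widgets_name_idx ON widgets (name)");
            } finally {
                st.close();
            }
        }
    }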

> 5. Increase sort_mem/work_mem in your postgresql.conf when recreating
> indexes etc.
> 6. Use multiple processes to make sure the I/O is maxed out.

Number 5 falls in line with 4, and 6 is definitely doable.
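
For what it's worth, a rough sketch of both: worker threads for 6., and a per-session bump in sort memory for the index rebuild in 5. (all names invented; loadSlice() is a hypothetical stand-in for our real loader, and the relevant setting is maintenance_work_mem on 8.0 vs. sort_mem on 7.x):

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.util.List;

    public class ParallelLoadSketch {
        // 6.: one connection per worker thread, each loading its own
        // slice of the data, to keep the I/O subsystem busy.
        static void parallelLoad(List slices) throws InterruptedException {
            Thread[] workers = new Thread[slices.size()];
            for (int i = 0; i < workers.length; i++) {
                final Object slice = slices.get(i);
                workers[i] = new Thread(new Runnable() {
                    public void run() {
                        loadSlice(slice);
                    }
                });
                workers[i].start();
            }
            for (int i = 0; i < workers.length; i++) {
                workers[i].join();   // wait for every slice to finish
            }
        }

        // Hypothetical: open a connection and load one slice of the data.
        static void loadSlice(Object slice) { /* ... */ }

        // 5.: give just the index-rebuilding session more sort memory
        // (maintenance_work_mem on 8.0, value in kB; sort_mem on 7.x).
        static void rebuildIndexes(Connection conn) throws SQLException {
            Statement st = conn.createStatement();
            try {
                st.execute("SET maintenance_work_mem = 262144");  // 256 MB
                st.execute("CREATE INDEX widgets_name_idx ON widgets (name)");
            } finally {
                st.close();
            }
        }
    }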

Thanks for the suggestions!

John
