Inserting 26 million rows takes 8 hours, how to improve those times?

From: Jose Vicente Nunez Z <josevnz(at)newbreak(dot)com>
To: pgsql-admin(at)postgresql(dot)org
Subject: Inserting 26 million rows takes 8 hours, how to improve those times?
Date: 2003-09-16 13:01:14
Message-ID: 1063717273.5407.16.camel@linux0037
Lists: pgsql-admin

Greetings,

I'm trying to use PostgreSQL to manage large amounts of data. One of the
first things I'm testing is how fast PostgreSQL can load some big CSV
text files.

For that I'm using PostgreSQL's COPY command, but the problem is that it
is taking almost 9 hours to load the data:

copy nb_cmo_deal_pools from '/postgres-system/datafile.txt' DELIMITERS
'|';

[root(at)linux0105 root]# time psql -Upostgres MYDB < load.sql

real 487m47.632s
user 0m0.020s
sys 0m0.000s
[root(at)linux0105 root]#

[root(at)linux0105 root]# cat /postgres-system/datafile.txt | wc -l
26026965
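
One common way to cut bulk-load time (a sketch of what a revised load.sql might look like, not something tested here; the index name and columns are hypothetical) is to drop indexes and triggers before the COPY and rebuild them afterwards, so each inserted row skips index maintenance:

```sql
-- Hypothetical faster load.sql: drop the index, bulk-load, then rebuild.
-- Rebuilding one index over 26M rows is much cheaper than updating it
-- 26M times during the COPY.
DROP INDEX nb_cmo_deal_pools_idx;

COPY nb_cmo_deal_pools FROM '/postgres-system/datafile.txt' DELIMITERS '|';

CREATE INDEX nb_cmo_deal_pools_idx ON nb_cmo_deal_pools (deal_id);

-- Refresh planner statistics after the load.
ANALYZE nb_cmo_deal_pools;
```
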

I've already played with filesystem options on my ext3 system, and I would like to know:

1) How can I tune PostgreSQL to improve the insertion speed (besides dropping indexes and disabling fsync)?
2) How 'safe' is it to use the fsync=off option on an ext3 journaled filesystem? The journal should give me some protection in case the system goes down, shouldn't it?
3) I've read several web pages that talk about tuning the sort and buffer options in PostgreSQL, but found no good examples of optimizations. Does anyone know where I can find more help? (My test system has 2GB of RAM,
and I think 1.5GB for the database should be fair enough.)
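
For question 3, a starting point might be a postgresql.conf along these lines (a sketch only; the parameter names match the 7.x-era settings, but the exact values are assumptions to benchmark on this 2GB box, not recommendations from the list):

```
# Hypothetical bulk-load settings for a 2GB-RAM machine (7.x-era names).
shared_buffers = 32768       # 256MB of shared buffer cache (8KB pages)
sort_mem = 65536             # 64MB per sort operation, in KB
checkpoint_segments = 32     # fewer, larger checkpoints during the load
wal_buffers = 64             # more WAL buffering for heavy write traffic
fsync = false                # only if the load can simply be rerun after a crash
```
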

I apologize if this is not the proper place to post these questions.

Thanks in advance,

--
Jose Vicente Nunez Zuleta (josevnz at newbreak dot com)
Newbreak LLC System Administrator
http://www.newbreak.com
RHCE, SCJD, SCJP

