Re: Disk Performance Problem on Large DB

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "pgsql-admin(at)postgresql(dot)org" <pgsql-admin(at)postgresql(dot)org>, "Jonathan Hoover" <jhoover(at)yahoo-inc(dot)com>
Subject: Re: Disk Performance Problem on Large DB
Date: 2010-11-04 21:03:35
Message-ID: 4CD2D95702000025000372EC@gw.wicourts.gov
Lists: pgsql-admin

"Jonathan Hoover" <jhoover(at)yahoo-inc(dot)com> wrote:

> I have a simple database, with one table for now. It has 4
> columns:
>
> anid serial primary key unique,
> time timestamp,
> source varchar(5),
> unitid varchar(15),
> guid varchar(32)
>
> There is a btree index on each.
>
> I am loading data 1,000,000 (1M) rows at a time using psql and a
> COPY command. Once I hit 2M rows, my performance just drops out

Drop the indexes and the primary key before you copy in.
Personally, I strongly recommend a VACUUM FREEZE ANALYZE after the
bulk load. Then use ALTER TABLE to restore the primary key, and
create all the other indexes.
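
Very roughly, and with made-up table and index names (I'm guessing the
table is called something like "events" -- substitute whatever you
actually have), the sequence would look something like:

  -- drop the secondary indexes and the primary key before loading
  ALTER TABLE events DROP CONSTRAINT events_pkey;
  DROP INDEX events_time_idx;
  DROP INDEX events_source_idx;
  DROP INDEX events_unitid_idx;
  DROP INDEX events_guid_idx;

  -- load each batch; anid fills in from the sequence if you omit it
  COPY events (time, source, unitid, guid) FROM '/path/to/chunk.csv' WITH CSV;

  -- after the last batch
  VACUUM FREEZE ANALYZE events;

  ALTER TABLE events ADD PRIMARY KEY (anid);
  CREATE INDEX events_time_idx ON events (time);
  CREATE INDEX events_source_idx ON events (source);
  CREATE INDEX events_unitid_idx ON events (unitid);
  CREATE INDEX events_guid_idx ON events (guid);

Building each index once, after all the data is in place, is far
cheaper than maintaining five btrees row by row during the COPY.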

Also, if you don't mind starting over from initdb if it crashes
partway through, you can turn fsync off. You want a big
maintenance_work_mem setting during the index builds -- at least
200 MB.
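
If you go that route, it's just a couple of settings -- fsync lives in
postgresql.conf (reload the server after changing it), while
maintenance_work_mem can be raised for just the session that builds
the indexes:

  # postgresql.conf -- only for the duration of the load; a crash with
  # fsync off can leave the cluster unrecoverable, hence the initdb caveat
  fsync = off

  -- in the psql session that rebuilds the primary key and indexes:
  SET maintenance_work_mem = '200MB';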

-Kevin
