From: | Richard Broersma Jr <rabroersma(at)yahoo(dot)com> |
---|---|
To: | Nikola Milutinovic <alokin1(at)yahoo(dot)com>, Chad Wagner <chad(dot)wagner(at)gmail(dot)com> |
Cc: | PostgreSQL general <pgsql-general(at)postgresql(dot)org> |
Subject: | Re: slow speeds after 2 million rows inserted |
Date: | 2006-12-31 17:00:50 |
Message-ID: | 695013.37136.qm@web31811.mail.mud.yahoo.com |
Lists: | pgsql-general |
> It has been quite some time since I experimented with this. My last experiment was on PG
> 7.2 or 7.3. I was inserting approximately 800,000 rows. Inserting without transactions took 25 hrs.
> Inserting with 10,000 rows per transaction took about 2.5 hrs. So, the speedup was 10x. I have
> not experimented with the transaction batch size, but I suspect that 1,000 would not show much
> speedup.
>
> > 2. Vacuuming also makes no difference for a heavy insert-only table, only slows it down.
>
> Makes sense. Since my application was dumping all records each month and inserting new ones,
> vacuum was really needed, but it provided no speedup.
>
> > 3. Table size plays no real factor.
Maybe this link may be useful; it contains additional links to various PostgreSQL performance
tests.
http://archives.postgresql.org/pgsql-general/2006-10/msg00662.php
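For reference, here is a minimal sketch of the batched-insert pattern discussed above, using psycopg2 from Python. The table and column names are placeholders only, not from the original test, and the batch size of 10,000 just mirrors the figure quoted earlier:

    # Batched inserts: commit every BATCH_SIZE rows instead of once per row.
    # The table "events(id integer, payload text)" is hypothetical.
    import psycopg2

    BATCH_SIZE = 10000  # rows per transaction

    conn = psycopg2.connect("dbname=test")
    cur = conn.cursor()

    for i in range(800000):
        cur.execute(
            "INSERT INTO events (id, payload) VALUES (%s, %s)",
            (i, "row %d" % i),
        )
        # Committing once per batch avoids the per-row transaction overhead
        # that makes unbatched (autocommit-style) inserts so slow.
        if (i + 1) % BATCH_SIZE == 0:
            conn.commit()

    conn.commit()  # flush the last partial batch
    cur.close()
    conn.close()

The same idea applies from psql: wrap blocks of INSERT statements in explicit BEGIN/COMMIT rather than letting each statement run in its own transaction.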
Regards,
Richard Broersma Jr.