From: Grupos <grupos(at)carvalhaes(dot)net>
To: pgsql-performance(at)postgresql(dot)org
Subject: Improve BULK insertion
Date: 2004-12-04 13:39:39
Message-ID: 41B1BE1B.10301@carvalhaes.net
Lists: pgsql-performance
Hi!
I need to insert 500,000 records into a table frequently. It's a bulk
insertion from my application.
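To illustrate the kind of load I mean (a simplified sketch; the table,
columns, and statement style are invented for illustration, not my real
code), think of a long run of INSERTs in one transaction:

    BEGIN;
    -- one statement per record, ~500,000 in total
    INSERT INTO items (id, descr, price) VALUES (1, 'foo', 10.00);
    INSERT INTO items (id, descr, price) VALUES (2, 'bar', 12.50);
    ...
    COMMIT;

I know COPY is supposed to be much faster for loads like this, e.g.
(same invented table, assuming a data file the server can read):

    COPY items (id, descr, price) FROM '/tmp/items.dat' WITH DELIMITER AS ';';

but I would still like to understand the slowdown.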
Performance is very poor: PostgreSQL inserts very fast up to around
tuple 200,000, but after that the insertion becomes really slow.
Looking at the server log, I see a lot of transaction log activity,
something like:
2004-12-04 11:08:59 LOG: recycled transaction log file "0000000600000012"
2004-12-04 11:08:59 LOG: recycled transaction log file "0000000600000013"
2004-12-04 11:08:59 LOG: recycled transaction log file "0000000600000011"
2004-12-04 11:14:04 LOG: recycled transaction log file "0000000600000015"
2004-12-04 11:14:04 LOG: recycled transaction log file "0000000600000014"
2004-12-04 11:19:08 LOG: recycled transaction log file "0000000600000016"
2004-12-04 11:19:08 LOG: recycled transaction log file "0000000600000017"
2004-12-04 11:24:10 LOG: recycled transaction log file "0000000600000018"
How can I configure PostgreSQL to get better performance on these bulk
insertions? I have already increased the memory-related settings.
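If I understand the log correctly (and I may not), the "recycled
transaction log file" messages mean WAL segments are being reused at each
checkpoint, so perhaps the load is checkpoint-bound. The kind of change I
am considering is, in postgresql.conf (the values are guesses for this
workload, untested):

    checkpoint_segments = 16    # default is 3; allow more WAL between checkpoints
    checkpoint_timeout = 900    # default is 300; seconds between forced checkpoints

Does that sound like the right direction?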
My setup:
Conectiva Linux, kernel 2.6.9
PostgreSQL 7.4.6, 1.5 GB memory
max_connections = 30
shared_buffers = 30000
sort_mem = 32768
vacuum_mem = 32768
max_fsm_pages = 30000
max_fsm_relations = 1500
All other settings are at their defaults.
Cheers,
Rodrigo Carvalhaes