From: Samuel Gendler <sgendler(at)ideasculptor(dot)com>
To: Adarsh Sharma <adarsh(dot)sharma(at)orkash(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Need to tune for Heavy Write
Date: 2011-08-04 17:40:07
Message-ID: CAEV0TzD7==4mmNXA2a_pM3o2aw1BmD2pNg+OFkM356webohPQw@mail.gmail.com
Lists: pgsql-performance
On Wed, Aug 3, 2011 at 9:56 PM, Adarsh Sharma <adarsh(dot)sharma(at)orkash(dot)com> wrote:
> Dear all,
>
> For the last few days I have been researching PostgreSQL performance tuning
> because of the slow speed of my server.
> My application selects about 100,000 rows from a MySQL database, processes
> them, and inserts them into 2 Postgres tables using about 45 connections.
It's already been mentioned, but it's worth reinforcing: if you are
inserting 100,000 rows in 100,000 transactions, you'll see a huge
performance improvement by doing many more inserts per transaction. Try
doing at least 500 inserts in each transaction (you can probably go quite a
bit higher than that without any issues, depending on what other traffic
the database is handling in parallel). You almost certainly don't need 45
connections to insert only 100,000 rows. On a crappy VM with 2GB of RAM,
inserting 100,000 relatively narrow rows takes me less than 10 seconds when
I do it in a single transaction on a single connection. Probably much less
than 10 seconds, in fact, but the code I just tested with does other work
while doing the inserts, so I don't have a pure measurement at hand.
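
For illustration, here's a minimal sketch of what batching looks like,
assuming the loader is written in Python with psycopg2 (the DSN, table
name, columns, and batch size of 500 are all hypothetical; adapt them to
your schema):

```python
# Minimal sketch: commit once per batch of rows instead of once per row.
# The DSN, table name, and columns below are placeholders.
import psycopg2
from psycopg2.extras import execute_values

BATCH_SIZE = 500  # rows per transaction; tune for your workload


def insert_batched(rows):
    conn = psycopg2.connect("dbname=target user=app")  # hypothetical DSN
    try:
        with conn.cursor() as cur:
            for start in range(0, len(rows), BATCH_SIZE):
                batch = rows[start:start + BATCH_SIZE]
                # execute_values expands the batch into one multi-row INSERT
                execute_values(
                    cur,
                    "INSERT INTO target_table (col_a, col_b) VALUES %s",
                    batch,
                )
                conn.commit()  # one commit per batch, not per row
    finally:
        conn.close()
```

A single connection doing something like this should easily outrun 45
connections that each commit after every row, since the per-transaction
overhead (fsync on commit, in particular) is paid 200 times instead of
100,000 times.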