Re: Need to tune for Heavy Write

From: Craig Ringer <ringerc(at)ringerc(dot)id(dot)au>
To: Adarsh Sharma <adarsh(dot)sharma(at)orkash(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Need to tune for Heavy Write
Date: 2011-08-04 13:49:54
Message-ID: 4E3AA382.9050302@ringerc.id.au
Lists: pgsql-performance

On 4/08/2011 12:56 PM, Adarsh Sharma wrote:
> Dear all,
>
> Over the last few days I have been researching PostgreSQL performance
> tuning because my server is slow.
> My application selects about 100,000 rows from a MySQL database,
> processes them, and inserts them into 2 Postgres tables using about 45
> connections.

Why 45?

Depending on your disk subsystem, that may be way too many for optimum
throughput. Or too few, for that matter.

Also, how are you doing your inserts? Are they being done in a single
big transaction per connection, or at least in reasonable chunks? If
you're doing stand-alone INSERTs autocommit-style you'll see pretty
shoddy performance.
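
A minimal sketch of what I mean, with made-up table and column names:
multi-row INSERTs inside an explicit transaction, rather than one
autocommitted INSERT per row:

    BEGIN;
    INSERT INTO target_table (id, payload) VALUES
        (1, 'first row'),
        (2, 'second row'),
        -- ... a few hundred to a few thousand rows per statement ...
        (3, 'third row');
    COMMIT;

One round trip per batch and one commit per transaction, instead of one
of each per row, makes a big difference.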

Have you looked into using COPY to bulk-load your data? Possibly via
the libpq or JDBC COPY APIs, or with server-side COPY?
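
As a rough sketch (names invented again): server-side COPY reads a file
that must be accessible to the PostgreSQL server process, while psql's
\copy streams the file from the client machine:

    -- server-side; the path is on the database server:
    COPY target_table (id, payload) FROM '/tmp/data.csv' WITH CSV;

    -- client-side, from psql:
    \copy target_table (id, payload) FROM 'data.csv' WITH CSV

The JDBC driver exposes the same protocol through
org.postgresql.copy.CopyManager if you are loading from Java.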

> fsync=off full_page_writes=off synchronous_commit=off

!!!!

I hope you don't want to KEEP that data if you have a hardware fault or
power loss. Setting fsync=off is pretty much saying "I don't mind if you
eat my data".

Keep. Really. Really. Good. Backups.
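
If the goal was just faster commits, a sketch of a less dangerous
compromise (assuming you can afford to lose the last few transactions
after a crash, but not the whole cluster):

    fsync = on                  # protects against data corruption
    full_page_writes = on      # protects against torn-page corruption
    synchronous_commit = off   # commit returns before the WAL is
                               # flushed; a crash can lose the most
                               # recent commits, but the database
                               # itself stays consistent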

--
Craig Ringer
