Re: huge price database question..

From: Jim Green <student(dot)northwestern(at)gmail(dot)com>
To: Michael Nolan <htfoot(at)gmail(dot)com>
Cc: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: huge price database question..
Date: 2012-03-21 01:03:47
Message-ID: CACAe89x_XSh=ZYZA9esjXCUQE5T3PLXKph5HgSFsat8DeDSWyQ@mail.gmail.com
Lists: pgsql-general

On 20 March 2012 19:45, Michael Nolan <htfoot(at)gmail(dot)com> wrote:
>
>>
>> right now I have about 7000 tables, one per stock, and I use
>> perl to do the inserts; it's very slow. I would like to use COPY or
>> another bulk loading tool to load the daily raw gz data, but I would
>> need to split the file into per-stock files first before bulk loading.
>> I consider this a bit messy.
>
>
> Are you committing each insert separately, or doing them in batches using
> 'begin transaction' and 'commit'?
>
> I have a database that I load via inserts from a text file. Doing a commit
> every 1000 transactions cut the time by over 90%.
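
If I understand correctly, the batching you describe would look roughly
like the sketch below with DBI/DBD::Pg. The database, table, and column
names are placeholders, not my actual schema:

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# AutoCommit off, so rows accumulate in one transaction until we commit.
my $dbh = DBI->connect('dbi:Pg:dbname=prices', '', '',
                       { AutoCommit => 0, RaiseError => 1 });

# Placeholder table and columns, not the real schema.
my $sth = $dbh->prepare(
    'INSERT INTO daily_prices (symbol, trade_date, open, high, low, close, volume)
     VALUES (?, ?, ?, ?, ?, ?, ?)');

my $count = 0;
while (my $line = <STDIN>) {
    chomp $line;
    $sth->execute(split /,/, $line);
    $dbh->commit if ++$count % 1000 == 0;   # commit every 1000 rows
}
$dbh->commit;        # flush the final partial batch
$dbh->disconnect;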

I use Perl DBI with prepared statements. I also set
shared_buffers = 4GB
work_mem = 1GB
synchronous_commit = off
effective_cache_size = 8GB
fsync = off
full_page_writes = off

when I do the inserts.
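
What I had in mind with COPY is roughly the sketch below, using DBD::Pg's
COPY-from-STDIN support to stream the gunzipped daily file straight into a
table. The file name, table name, and CSV layout are assumptions for
illustration:

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:Pg:dbname=prices', '', '',
                       { AutoCommit => 0, RaiseError => 1 });

# Stream the gzipped daily file without writing an intermediate copy.
open my $fh, '-|', 'gunzip -c daily.csv.gz'
    or die "cannot start gunzip: $!";

# Assumed target table and CSV format.
$dbh->do('COPY daily_prices FROM STDIN WITH (FORMAT csv)');
while (my $line = <$fh>) {
    $dbh->pg_putcopydata($line);
}
$dbh->pg_putcopyend();

$dbh->commit;
$dbh->disconnect;

Assuming the rows go into a single table, that would avoid writing
per-stock files to disk at all, which is the part I find messy.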

Thanks!

> --
> Mike Nolan
