Re: huge price database question..

From: Michael Nolan <htfoot(at)gmail(dot)com>
To: Jim Green <student(dot)northwestern(at)gmail(dot)com>
Cc: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: huge price database question..
Date: 2012-03-20 23:45:20
Message-ID: CAOzAquKmRNcfiq_ubC7S211cP_+vOy3NZinpt4DmNj3uBTJcRg@mail.gmail.com
Lists: pgsql-general

>
> Right now I have about 7000 tables, one per individual stock, and I use
> perl to do the inserts; it's very slow. I would like to use copy or
> another bulk loading tool to load the daily raw gz data, but I need to
> split the file into per-stock files first before I can do bulk loading.
> I consider this a bit messy.

Are you committing each insert separately or doing them in batches using
'begin transaction' and 'commit'?
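
Something along these lines, for instance -- just a rough DBI sketch, with
the database, table and column names invented:

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Batched inserts: AutoCommit off, one commit per 1000 rows.
# 'prices', 'daily_prices' and the column names are placeholders.
my $dbh = DBI->connect('dbi:Pg:dbname=prices', '', '',
                       { AutoCommit => 0, RaiseError => 1 });

my $sth = $dbh->prepare(
    'INSERT INTO daily_prices (symbol, trade_date, price) VALUES (?, ?, ?)');

my $n = 0;
while (my $line = <STDIN>) {        # e.g. gunzip -c daily.gz | this_script
    chomp $line;
    my ($symbol, $date, $price) = split /,/, $line;   # assuming CSV input
    $sth->execute($symbol, $date, $price);
    $dbh->commit if ++$n % 1000 == 0;   # end the batch, start a new one
}
$dbh->commit;       # commit the final partial batch
$dbh->disconnect;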

I have a database that I load with inserts from a text file. Doing a commit
every 1000 inserts (rather than after each one) cut the time by over 90%.
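
If you do move to COPY from perl, DBD::Pg can feed it straight from the
script -- a minimal sketch, with invented database/table/column names.
Note that only one COPY can be open per connection, so with one table per
stock you would still have to group the rows by table first:

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# COPY ... FROM STDIN via DBD::Pg.  'prices', 'daily_prices' and the
# columns are placeholders; lines passed to pg_putcopydata must be
# tab-separated and newline-terminated to match COPY's default format.
my $dbh = DBI->connect('dbi:Pg:dbname=prices', '', '',
                       { AutoCommit => 0, RaiseError => 1 });

$dbh->do('COPY daily_prices (symbol, trade_date, price) FROM STDIN');
while (my $line = <STDIN>) {        # e.g. gunzip -c daily.gz | this_script
    $dbh->pg_putcopydata($line);
}
$dbh->pg_putcopyend();
$dbh->commit;
$dbh->disconnect;
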
--
Mike Nolan
