Re: huge price database question..

From: David Kerr <dmk(at)mr-paradox(dot)net>
To: Jim Green <student(dot)northwestern(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: huge price database question..
Date: 2012-03-21 02:21:17
Message-ID: 4F693B1D.9070007@mr-paradox.net
Lists: pgsql-general

On 03/20/2012 07:08 PM, Jim Green wrote:
> On 20 March 2012 22:03, David Kerr <dmk(at)mr-paradox(dot)net> wrote:
>
>> \copy on 1.2million rows should only take a minute or two, you could make
>> that table "unlogged"
>> as well to speed it up more. If you could truncate / drop / create / load /
>> then index the table each
>> time then you'll get the best throughput.
> Thanks. Could you explain the "truncate / drop / create / load /
> then index the table each time then you'll get the best throughput"
> part, or point me to some docs?
>
> Jim

I'm imagining that you're loading the raw file into a temporary staging
table that you're going to use to process / slice the new data into
your 7000+ per-stock tables.

So that table probably doesn't need to stick around once you've
processed your stocks through it, so you can just truncate or drop it
after you're done.
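
Something like this, for example (the table name and columns here are
just made up to illustrate, not what yours would actually look like):

    -- unlogged skips WAL, so the bulk load is faster
    CREATE UNLOGGED TABLE staging (
        symbol text,
        ts     timestamptz,
        price  numeric
    );

    -- ... load and process the file ...

    TRUNCATE staging;          -- reuse it for the next file
    -- or: DROP TABLE staging; -- if you'd rather recreate it each time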

When you create it, skip the indexes and the inserts will be faster
(Postgres doesn't have to update the index on every insert). Then, once
the table is loaded, create the indexes (so they're actually useful)
and process the data into the various per-stock tables.
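
Putting it together, the load step might look something like this in
psql (the file name, index name, and per-stock table name are all
placeholders):

    \copy staging FROM 'prices.csv' WITH (FORMAT csv)

    -- index only after the bulk load, so each insert during the
    -- load doesn't have to maintain the index
    CREATE INDEX staging_symbol_idx ON staging (symbol);

    -- then slice the rows out into the per-stock tables
    INSERT INTO stock_aapl
        SELECT ts, price FROM staging WHERE symbol = 'AAPL';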

Dave
