Re: huge price database question..

From: David Kerr <dmk(at)mr-paradox(dot)net>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: huge price database question..
Date: 2012-03-21 03:24:22
Message-ID: 4F6949E6.8030703@mr-paradox.net
Lists: pgsql-general

On 03/20/2012 07:26 PM, Jim Green wrote:
> On 20 March 2012 22:21, David Kerr <dmk(at)mr-paradox(dot)net> wrote:
>
>> I'm imagining that you're loading the raw file into a temporary table
>> that you're going to use to process / slice new data into your 7000+
>> actual per-stock tables.
>
> Thanks! Would "slice new data into your 7000+ actual per-stock tables"
> be a relatively quick operation?

Well, it solves the problem of having to split up the raw file by stock
symbol. From there you can run multiple jobs in parallel to load
individual stocks into their individual tables, which is probably faster
than what you've got going now.
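
Roughly what I had in mind (an untested sketch; the staging table, the
per-stock table name ticks_aapl, and the column names are just
placeholders for whatever your schema actually looks like):

    CREATE TEMP TABLE staging (
        symbol  text,
        ts      timestamptz,
        price   numeric,
        volume  bigint
    );

    \copy staging FROM 'raw_prices.csv' WITH CSV

    -- an index may help if each parallel job scans staging for its symbols
    CREATE INDEX ON staging (symbol);

    -- each job then slices its own symbols out of staging, e.g.:
    INSERT INTO ticks_aapl (ts, price, volume)
    SELECT ts, price, volume
    FROM staging
    WHERE symbol = 'AAPL';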

It would probably be faster to load the individual stocks directly from
their own files, but then, as you said, you have to split the raw file up
first, and that may take some time.
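
If you do go the route of splitting the raw file per symbol up front,
each load is just a straight \copy into its own table, something like
this (file and table names again hypothetical):

    \copy ticks_aapl (ts, price, volume) FROM 'AAPL.csv' WITH CSV
    \copy ticks_msft (ts, price, volume) FROM 'MSFT.csv' WITH CSV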
