From: Jim Green <student(dot)northwestern(at)gmail(dot)com>
To: David Kerr <dmk(at)mr-paradox(dot)net>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: huge price database question..
Date: 2012-03-21 02:26:53
Message-ID: CACAe89wHWfDe56OAVRyNYqanjWKxgP2DY6jCdsPojHWr8Oi8KA@mail.gmail.com
Lists: pgsql-general
On 20 March 2012 22:21, David Kerr <dmk(at)mr-paradox(dot)net> wrote:
> I'm imagining that you're loading the raw file into a temporary table that
> you're going to use to process/slice new data into your 7000+ actual
> per-stock tables.
Thanks! Would "slice new data into your 7000+ actual per-stock tables" be a
relatively quick operation?
>
> So that table probably doesn't need to be around once you've processed your
> stocks through it, so you can just truncate/drop it after you're done.
>
> When you create it, avoid indexes and the inserts will be faster (Postgres
> doesn't have to update the index on every insert). Then, once the table is
> loaded, you create the indexes (so they're actually useful) and process the
> data into the various stock tables.
>
> Dave
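The staging-table workflow Dave describes could be sketched roughly like this. All table and column names here are illustrative assumptions (the thread never shows the actual schema), and the per-symbol INSERT stands in for whatever loop or script would iterate over all 7000+ stocks:

```sql
-- 1. Bulk-load the raw file into an index-free staging table
--    (no indexes means no per-row index maintenance during COPY).
CREATE TABLE staging_prices (
    symbol  text,
    ts      timestamp,
    price   numeric,
    volume  bigint
);
COPY staging_prices FROM '/path/to/raw_file.csv' WITH (FORMAT csv);

-- 2. Build the index only after loading, so it's actually useful
--    for the slicing step.
CREATE INDEX ON staging_prices (symbol);

-- 3. Slice rows into a per-stock table (repeat per symbol).
INSERT INTO prices_aapl (ts, price, volume)
SELECT ts, price, volume
FROM staging_prices
WHERE symbol = 'AAPL';

-- 4. The staging table doesn't need to stick around afterwards.
DROP TABLE staging_prices;
```

With the index from step 2 in place, each per-symbol INSERT ... SELECT is an index scan rather than a full scan of the staging table, which is what makes the slicing step relatively quick.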