From: Jim Green <student(dot)northwestern(at)gmail(dot)com>
To: David Kerr <dmk(at)mr-paradox(dot)net>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: huge price database question..
Date: 2012-03-21 02:12:05
Message-ID: CACAe89yC1XoDfah1TZsZMn+tg1DROSOJtdA6gb4kL2oOU2q49g@mail.gmail.com
Lists: pgsql-general
On 20 March 2012 22:08, Jim Green <student(dot)northwestern(at)gmail(dot)com> wrote:
> On 20 March 2012 22:03, David Kerr <dmk(at)mr-paradox(dot)net> wrote:
>
>> \copy on 1.2million rows should only take a minute or two, you could make
>> that table "unlogged"
>> as well to speed it up more. If you could truncate / drop / create / load /
>> then index the table each
>> time then you'll get the best throughput.
>
> Thanks, could you explain the "truncate / drop / create / load /
> then index the table each time then you'll get the best throughput"
> part.. or point me to some docs?..
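For reference, the cycle David describes might look something like the sketch below in psql. The table and column layout and the file name are made up for illustration; the point is that an unlogged table skips WAL writes during the load, and building the index once after the load is much cheaper than maintaining it row by row:

```sql
-- recreate the table fresh each day, without WAL logging, to speed the load
DROP TABLE IF EXISTS daily_prices;
CREATE UNLOGGED TABLE daily_prices (
    symbol  text,
    ts      timestamptz,
    price   numeric,
    volume  bigint
);

-- bulk-load the day's file; psql's \copy reads the file client-side
\copy daily_prices from 'daily_2012-03-20.csv' with csv

-- index only after the data is in place
CREATE INDEX ON daily_prices (symbol, ts);
```

Note that UNLOGGED tables (available since PostgreSQL 9.1) are not crash-safe, which is acceptable here because the raw file can simply be reloaded.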
Also, if I use copy I would be tempted to go the one-table route;
otherwise I would need to parse my raw daily file, split it into a
separate file per symbol, and copy each one into an individual
per-symbol table (which does not sound very efficient)..
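If the per-symbol route were taken anyway, the split step Jim describes could be sketched as below. The input layout (symbol in the first column of a CSV) and the file names are assumptions for illustration:

```python
import csv
from collections import defaultdict

def split_by_symbol(rows):
    """Group raw daily rows by symbol (assumed to be the first column)."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[0]].append(row)
    return dict(groups)

def write_symbol_files(daily_file, out_dir="."):
    # hypothetical layout: symbol,timestamp,price,volume per line
    with open(daily_file, newline="") as f:
        groups = split_by_symbol(csv.reader(f))
    for symbol, rows in groups.items():
        # one file per symbol, ready to \copy into its own table
        with open(f"{out_dir}/{symbol}.csv", "w", newline="") as out:
            csv.writer(out).writerows(rows)
```

This is a single pass over the daily file, so the cost is the extra file I/O plus one \copy per symbol rather than one for the whole day.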
>
> Jim
>>
>> Dave
>> --
>> Sent via pgsql-general mailing list (pgsql-general(at)postgresql(dot)org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
From | Date | Subject
---|---|---
Next Message | David Kerr | 2012-03-21 02:21:17 | Re: huge price database question..
Previous Message | Jim Green | 2012-03-21 02:08:48 | Re: huge price database question..