| From: | Jim Green <student(dot)northwestern(at)gmail(dot)com> |
|---|---|
| To: | David Kerr <dmk(at)mr-paradox(dot)net> |
| Cc: | pgsql-general(at)postgresql(dot)org |
| Subject: | Re: huge price database question.. |
| Date: | 2012-03-21 02:08:48 |
| Message-ID: | CACAe89wD0VXmxRWSr_jxxT9a9Mgn-tDhoP6JYYc_ZQHX2SEkOQ@mail.gmail.com |
| Lists: | pgsql-general |
On 20 March 2012 22:03, David Kerr <dmk(at)mr-paradox(dot)net> wrote:
> \copy on 1.2 million rows should only take a minute or two; you could make
> that table "unlogged" as well to speed it up more. If you can truncate /
> drop / create / load / then index the table each time, you'll get the best
> throughput.
Thanks. Could you explain the "truncate / drop / create / load /
then index the table each time then you'll get the best throughput"
part, or point me to some docs?
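For context, the load cycle being described might look roughly like the sketch below. The table and index names (`daily_prices`, `daily_prices_sym_idx`) and the CSV file name are hypothetical placeholders, not from the thread; the idea is simply that building an index once after a bulk load is much cheaper than maintaining it row by row during the load.

```sql
-- Hypothetical names throughout. One load cycle:
-- clear old rows, drop indexes, bulk-load, then rebuild indexes.

TRUNCATE daily_prices;                       -- empty the table cheaply

DROP INDEX IF EXISTS daily_prices_sym_idx;   -- indexes slow down bulk loads

\copy daily_prices FROM 'prices.csv' WITH (FORMAT csv)

CREATE INDEX daily_prices_sym_idx
    ON daily_prices (symbol, trade_date);    -- rebuild once, after loading
```

Making the table `UNLOGGED` at creation time (`CREATE UNLOGGED TABLE ...`) skips write-ahead logging, which speeds loads further, at the cost that the table is truncated after a crash.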
Jim
>
> Dave
| From | Date | Subject | |
|---|---|---|---|
| Next Message | Jim Green | 2012-03-21 02:12:05 | Re: huge price database question.. |
| Previous Message | David Kerr | 2012-03-21 02:03:20 | Re: huge price database question.. |