| From: | "Jason L(dot) Buberel" <jason(at)buberel(dot)org> |
|---|---|
| To: | depesz(at)depesz(dot)com |
| Cc: | pgsql-general(at)postgresql(dot)org |
| Subject: | Re: Alternative to drop index, load data, recreate index? |
| Date: | 2007-09-13 16:43:39 |
| Message-ID: | 46E968BB.8070800@buberel.org |
| Lists: | pgsql-general |
Depesz,
Thank you for the suggestion. I had read up on that tool earlier, but
somehow forgot about it when starting this phase of my investigation.
Needless to say, I can confirm the performance claims made on the project
homepage when loading very large data sets:
- Loading 1.2M records into an indexed table:
  - pg_bulkload: 5m 29s
  - COPY: 53m 20s
These results were obtained with PostgreSQL 8.2.4 and pg_bulkload 2.2.0.
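For comparison, the "drop index, load data, recreate index" baseline from the subject line looks roughly like the sketch below. The table, column, index, and file names are made up for illustration; the real schema and paths would differ:

```sql
-- A minimal sketch of the manual bulk-load pattern (hypothetical names).
BEGIN;

-- 1. Drop the index so COPY doesn't pay per-row index maintenance.
DROP INDEX IF EXISTS price_history_date_idx;

-- 2. Bulk-load the data in a single COPY.
COPY price_history (recorded_on, sku, price)
    FROM '/tmp/price_history.csv' WITH CSV;

-- 3. Rebuild the index in one pass over the table.
CREATE INDEX price_history_date_idx ON price_history (recorded_on);

COMMIT;

-- Refresh planner statistics after the load.
ANALYZE price_history;
```

Rebuilding the index once after the load is usually far cheaper than updating it for every inserted row, which is the same trade-off pg_bulkload exploits internally.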
-jason
hubert depesz lubaczewski wrote:
> On Mon, Sep 10, 2007 at 05:06:35PM -0700, Jason L. Buberel wrote:
>
>> I am considering moving to date-based partitioned tables (each table =
>> one month-year of data, for example). Before I go that far - is there
>> any other tricks I can or should be using to speed up my bulk data loading?
>>
>
> did you try pgbulkload? (http://pgbulkload.projects.postgresql.org/)
>
> depesz
>
>