When loading very large data exports (> 1 million records), I have found
it necessary to use the following sequence to achieve even reasonable
import performance (the full sequence is sketched after the list):
1. Drop all indices on the recipient table
2. Use "copy recipient_table from '/tmp/input.file';"
3. Recreate all indices on the recipient table
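
For reference, the whole sequence looks roughly like this (the index
name and indexed column are placeholders, not my real schema):

    -- 1. Drop the indices so COPY doesn't maintain them row by row
    DROP INDEX recipient_table_created_at_idx;

    -- 2. Bulk-load the data in a single transaction
    BEGIN;
    COPY recipient_table FROM '/tmp/input.file';
    COMMIT;

    -- 3. Rebuild the indices in one pass over the loaded table
    CREATE INDEX recipient_table_created_at_idx
        ON recipient_table (created_at);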
However, I now have tables so large that even the 'recreate all indices'
step is taking too long (15-20 minutes on 8.2.4).
I am considering moving to date-based partitioned tables (each table
holding one month of data, for example); a rough sketch of what I have
in mind is below. Before I go that far, are there any other tricks I
can or should be using to speed up my bulk data loading?
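
For context, here is a minimal sketch of the partitioning scheme I have
in mind, using 8.2-style inheritance plus CHECK constraints (table and
column names are illustrative only):

    -- Parent table holds no rows itself; each child holds one month
    CREATE TABLE recipient_table (
        id         bigint,
        created_at date,
        payload    text
    );

    -- One child per month; the CHECK constraint lets the planner skip
    -- irrelevant months when constraint_exclusion is on
    CREATE TABLE recipient_table_2007_06 (
        CHECK (created_at >= DATE '2007-06-01'
           AND created_at <  DATE '2007-07-01')
    ) INHERITS (recipient_table);

    -- Bulk loads would then target a single child, so only that
    -- month's indices need to be dropped and rebuilt:
    COPY recipient_table_2007_06 FROM '/tmp/input.file';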
Thanks,
jason