Célestin HELLEU <celestin(dot)helleu(at)maporama(dot)com> writes:
> Well, with any database, if I had to insert 20 000 000 records in a table,
> I wouldn't do it in one transaction, it makes a very big intermediate
> file, and the commit at the end is really heavy.
There may be some databases where the above is correct thinking, but
Postgres isn't one of them. The time to do COMMIT, per se, is
independent of the number of rows inserted.
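To illustrate the transaction-batching pattern in question, here is a minimal sketch using Python's stdlib sqlite3 as a stand-in (an assumption for runnability, not Postgres, and the `items` table is hypothetical); the point it shows, doing many inserts inside a single transaction with one commit rather than committing per row, applies the same way to Postgres:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER, payload TEXT)")

# Slow pattern: one transaction (and one commit) per row:
#   for i in range(n):
#       conn.execute("INSERT INTO items VALUES (?, ?)", (i, "x"))
#       conn.commit()

# Batched pattern: all rows in a single transaction, one commit at the end.
rows = ((i, f"row-{i}") for i in range(100_000))
with conn:  # opens a transaction; commits once on clean exit
    conn.executemany("INSERT INTO items VALUES (?, ?)", rows)

count = conn.execute("SELECT count(*) FROM items").fetchone()[0]
print(count)  # 100000
```

The commit at the end covers all 100 000 rows at once; per Tom's point, in Postgres that single COMMIT costs roughly the same no matter how many rows it covers.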
You need to find out where your bottleneck actually is, without any
preconceptions inherited from some other database.
regards, tom lane