From: "Christopher Kings-Lynne" <chriskl(at)familyhealth(dot)com(dot)au>
To: "Curt Sampson" <cjs(at)cynic(dot)net>
Cc: <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Importing Large Amounts of Data
Date: 2002-04-15 08:24:36
Message-ID: GNELIHDDFBOCMGBFGEFOAECECCAA.chriskl@familyhealth.com.au
Lists: pgsql-hackers

> b) In fact, at times I don't need that data integrity. I'm perfectly
> happy to risk the loss of a table during import, if it lets me do the
> import more quickly, especially if I'm taking the database off line
> to do the import anyway. MS SQL server in fact allows me to specify
> relaxed integrity (with attendant risks) when doing a BULK IMPORT; it
> would be cool if Postgres allowed that too.
Well, I guess a TODO item would be to allow COPY to use relaxed constraints.
Don't know how that would go over with the core developers, though.
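In the meantime, the usual workaround is to drop the constraints and indexes,
COPY the data in, and recreate them afterwards, inside a transaction so a
failed load leaves nothing behind. A rough sketch (the table, constraint,
index, and file names here are made up):

  BEGIN;
  -- drop the foreign key and index before the bulk load
  ALTER TABLE bigtable DROP CONSTRAINT bigtable_fk;
  DROP INDEX bigtable_val_idx;
  -- load everything in one shot
  COPY bigtable FROM '/tmp/bigtable.dat';
  -- put the constraint and index back once the data is in
  ALTER TABLE bigtable ADD CONSTRAINT bigtable_fk
      FOREIGN KEY (other_id) REFERENCES othertable (id);
  CREATE INDEX bigtable_val_idx ON bigtable (val);
  COMMIT;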
> Thanks. This is the kind of useful information I'm looking for. I
> was doing a vacuum after, rather than before, generating the indices.
That's because VACUUM cleans out the indexes as well as the tables, so if you
vacuum after building the indexes it has that much more work to do; vacuum
first, then create the indexes.
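In other words, the order you want is roughly this (again with made-up names):

  COPY bigtable FROM '/tmp/bigtable.dat';
  VACUUM ANALYZE bigtable;        -- clean up and gather stats while there are no indexes to scan
  CREATE INDEX bigtable_val_idx ON bigtable (val);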
Chris