From: Pierre-Frédéric Caillaud <lists(at)boutiquenumerique(dot)com>
To: "Postgres General" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Date format for bulk copy
Date: 2004-10-13 18:36:50
Message-ID: opsftnbohocq72hf@musicbox
Lists: pgsql-general
>> Right, I *can* do this. But then I have to build knowledge into that
>> script so it can find each of these date fields (there's like 20 of them
>> across 10 different files) and then update that knowledge each time it
>> changes.
>
> In your case that's a reasonable argument against filtering the
> data with a script. Using a regular expression in the script might
> reduce or eliminate the need for some of the logic, but then you'd
> run the risk of reformatting data that shouldn't have been touched.
Yes, but:

Your script can query the database to fetch the data types of the fields, and from that know which ones need to be transformed and how. The script would take a dump file and a database, schema and table as arguments, read the file, and pipe the transformed data into psql with a COPY FROM STDIN command... that could save you a lot of work, no?
A bonus is that your script can complain if it detects incompatibilities, which makes the whole thing more fool-proof.
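
As a rough illustration (not from the original thread), here is a minimal Python sketch of that idea. It assumes psycopg2 is available, that the dump is tab-delimited COPY-style text, that the source dates look like MM/DD/YYYY, and that psql picks up its connection settings from the environment; all file, schema and table names are placeholders.

#!/usr/bin/env python
# Hypothetical sketch: look up the date columns of a table in the catalog,
# reformat those fields in a tab-delimited dump, and pipe the result into
# "psql -c 'COPY ... FROM STDIN'".
# Assumptions: psycopg2 installed, MM/DD/YYYY source dates, psql connection
# settings taken from PGHOST/PGDATABASE/etc. in the environment.

import subprocess
import sys

import psycopg2


def date_column_positions(dsn, schema, table):
    """Ask the catalog which columns are dates/timestamps; return 0-based positions."""
    query = """
        SELECT ordinal_position - 1
        FROM information_schema.columns
        WHERE table_schema = %s AND table_name = %s
          AND data_type IN ('date', 'timestamp without time zone',
                            'timestamp with time zone')
        ORDER BY ordinal_position
    """
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(query, (schema, table))
            return [row[0] for row in cur.fetchall()]


def reformat_date(value):
    """Turn MM/DD/YYYY into ISO YYYY-MM-DD; pass NULL markers and blanks through."""
    if value in (r'\N', ''):
        return value
    month, day, year = value.split('/')
    return '%s-%s-%s' % (year, month.zfill(2), day.zfill(2))


def main():
    dsn, schema, table, dump_path = sys.argv[1:5]
    date_cols = date_column_positions(dsn, schema, table)

    copy_cmd = 'COPY %s.%s FROM STDIN' % (schema, table)
    psql = subprocess.Popen(['psql', '-c', copy_cmd],
                            stdin=subprocess.PIPE, text=True)

    with open(dump_path) as dump:
        for line in dump:
            fields = line.rstrip('\n').split('\t')
            for i in date_cols:
                fields[i] = reformat_date(fields[i])
            psql.stdin.write('\t'.join(fields) + '\n')

    psql.stdin.close()
    sys.exit(psql.wait())


if __name__ == '__main__':
    main()

The point of reading information_schema.columns is that the script never hard-codes which of the twenty-odd date fields lives where; when the tables change, the catalog lookup changes with them, which is exactly the maintenance burden being avoided here.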