From: "Sean Davis" <sdavis2(at)mail(dot)nih(dot)gov>
To: "Andrew Hammond" <ahammond(at)ca(dot)afilias(dot)info>, "Deepblues" <deepblues(at)gmail(dot)com>
Cc: <pgsql-novice(at)postgresql(dot)org>
Subject: Re: Import csv file into multiple tables in Postgres
Date: 2005-02-28 11:31:23
Message-ID: 008401c51d89$0868fb30$1f6df345@WATSON
Lists: pgsql-novice
----- Original Message -----
From: "Andrew Hammond" <ahammond(at)ca(dot)afilias(dot)info>
To: "Deepblues" <deepblues(at)gmail(dot)com>
Cc: <pgsql-novice(at)postgresql(dot)org>
Sent: Sunday, February 27, 2005 9:28 PM
Subject: Re: [NOVICE] Import csv file into multiple tables in Postgres
> The brief answer is no: you cannot import from a single csv file into
> multiple tables.
>
> If the csv file consists of two distinct sections of data, then you could
> of course split it into two csv files. If what you want to do is normalize
> existing data, then you should first import the existing data into a
> working table. Then you can manipulate it within the database.
>
> It is unlikely that you will need perl to do any of this.
I use perl a lot for stuff like this, but I have found that in most cases the
easiest thing to do is to load the data into a single postgresql table and
then write the SQL selects and inserts that populate the multiple tables from
it. This has the added advantage that you keep a copy of the original data
available in case you don't put every column into the "working" tables. If
you end up doing this a lot, you can create a separate "loader" schema that
holds all of these raw csv tables in one place, not visible to most users, so
as not to clutter the "working" schema. Something like the example below.
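
As a rough sketch (the table and column names here are invented, and the CSV
mode of COPY assumes 8.0; note that COPY FROM a file must run as superuser,
otherwise use \copy from psql):

  -- A staging schema, kept out of sight of ordinary users.
  CREATE SCHEMA loader;
  REVOKE ALL ON SCHEMA loader FROM PUBLIC;

  -- The raw csv lands here, one column per csv field, everything text.
  CREATE TABLE loader.contacts_raw (
      name     text,
      email    text,
      company  text,
      city     text
  );

  COPY loader.contacts_raw FROM '/tmp/contacts.csv' WITH CSV HEADER;

  -- Normalize into the "working" tables with plain SQL.
  CREATE TABLE companies (
      company_id  serial PRIMARY KEY,
      company     text UNIQUE,
      city        text
  );

  CREATE TABLE contacts (
      contact_id  serial PRIMARY KEY,
      name        text,
      email       text,
      company_id  integer REFERENCES companies
  );

  INSERT INTO companies (company, city)
      SELECT DISTINCT company, city FROM loader.contacts_raw;

  INSERT INTO contacts (name, email, company_id)
      SELECT r.name, r.email, c.company_id
      FROM loader.contacts_raw r
      JOIN companies c ON c.company = r.company;

The raw table stays around afterwards, so if you later decide you need a
column you skipped, it is still there to re-select from.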
Sean