From: Paul Lambert <paul(dot)lambert(at)autoledgers(dot)com(dot)au>
To: pgsql-sql(at)postgresql(dot)org
Subject: Removing duplicate rows
Date: 2007-02-06 22:15:02
Message-ID: 45C8FDE6.6050603@autoledgers.com.au
Lists: pgsql-sql
I have some tables which I need to load from flat data files that can
contain upwards of five million rows.
Owing to the structure of the system the data originates from, the
extract that generates these flat files can occasionally include two
records with the same values in the columns that make up the primary
key of the corresponding table in my Postgres database, which causes
an error when I attempt to load the file.
I'm assuming the best way to get around this is to load the data into a
temporary table with "copy from" and then do a "select distinct into" my
real table.
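In rough terms, this is what I have in mind (the table names and the
file path are just placeholders for my real ones):

-- Load the raw extract into a staging table that has no primary key,
-- so duplicated rows don't abort the load.
COPY staging_deals FROM '/data/extracts/deals.dat';

-- Then push only the distinct rows across into the real table.
INSERT INTO deals
SELECT DISTINCT * FROM staging_deals;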
The question I have is how best to set up this temporary table in my
reload script. (With 14 servers running Postgres, each with 37 tables,
I don't want to be creating temporary tables manually each time I need
to reload the databases.)
As the table definitions get updated frequently, I'd like my script to
create the temporary table as a copy of the existing 'real' table. I'm
assuming "create table as" would be what I need to use, but what would
I use as the SELECT - i.e. how do I use a select to pull the definition
of a table?
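Something like the sketch below is what I'm picturing, assuming
"create table as" can be made to copy the structure without the rows
(names are again placeholders):

-- Build the staging table from the real table's column layout.
-- WHERE false selects no rows, so only the column names and types
-- (not constraints or defaults) are copied across.
CREATE TEMP TABLE staging_deals AS
SELECT * FROM deals WHERE false;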
Apologies if this is a stupid question - I'm still fairly new to
Postgres, so I'm not sure what system tables are available for pulling
out this kind of information.
Thanks in advance for any assistance you can offer.
Regards,
Paul.
--
Paul Lambert
Database Administrator
AutoLedgers