From: Eugene Dzhurinsky <jdevelop(at)gmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Import large data set into a table and resolve duplicates?
Date: 2015-02-14 17:37:44
Message-ID: 20150214173744.GA13063@devbox
Lists: pgsql-general
Hello!
I have a huge dictionary table with series data generated by a third-party
service. The table consists of two columns:
- id : serial, primary key
- series : varchar, not null, indexed
From time to time I need to apply a "patch" to the dictionary. The patch file
consists of "series" values, one per line.
Now I need to import the patch into the database and, for each row in the
source file, produce another file as follows:
- if the given "series" value already exists in the database, return ID:series
- otherwise, insert a new row, generating a new ID, and return ID:series
So the new file will contain both the ID and the series data, separated by a
tab or similar.
While reading and writing the data is not the question (I described the whole
task just in case), I wonder what the most efficient way of importing such
data into the table is, keeping in mind that:
- the dictionary table already contains ~200K records
- a patch could be ~1-50K records long
A rough sketch of what I have in mind follows below.
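For reference, here is one approach I could take, assuming the dictionary
table is called "dictionary" and the patch is staged through a temporary
table (the table and file names here are placeholders): load the patch with
COPY, insert only the series that are missing, then export the resolved
ID:series pairs.

    -- stage the patch file in a temporary table
    CREATE TEMP TABLE patch_series (series varchar NOT NULL);
    COPY patch_series (series) FROM '/path/to/patch.txt';

    -- insert only the series the dictionary does not have yet
    INSERT INTO dictionary (series)
    SELECT DISTINCT p.series
    FROM patch_series p
    WHERE NOT EXISTS (
        SELECT 1 FROM dictionary d WHERE d.series = p.series
    );

    -- export tab-separated ID/series pairs for every row in the patch
    COPY (
        SELECT d.id, d.series
        FROM dictionary d
        JOIN patch_series p ON p.series = d.series
    ) TO '/path/to/patch_resolved.tsv';

I am not sure whether this is efficient enough against a ~200K-row table,
hence the question.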
Thanks!
--
Eugene N Dzhurinsky