From: | Einar Karttunen <ekarttun(at)cs(dot)Helsinki(dot)FI> |
---|---|
To: | Orion <o2(at)trustcommerce(dot)com> |
Cc: | pgsql-general(at)postgresql(dot)org |
Subject: | Re: What's the fastest way to do this? |
Date: | 2001-11-09 06:38:59 |
Message-ID: | 20011109083859.A6858@cs.helsinki.fi |
Lists: | pgsql-general |
On Thu, Nov 08, 2001 at 11:58:49AM -0800, Orion wrote:
>
> I have several really big tables whose rows are uniquely identified by
> single or multiple columns. [ I have about 25 tables, 10k to 500k rows
> per table ]
>
> Each day I get a flat file of updates. I have no way of knowing which
> lines in the file are new records and which are updates for existing
> records.
>
> I need a way to insert the new ones and update the old ones. I have
> a couple of ideas but none of them seem fast enough ( I will soon
> be getting updates faster than I can feed them into the database ).
>
Hello
I was facing a similar problem some time ago. My solution was to create
a temp table and COPY the new data into it. After that I deleted all records
in the original table which also existed in the temporary table. Then I just
did an INSERT from a SELECT * on the temp table. Of course with this
approach you have to lock the tables.
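A rough sketch of the approach, assuming a hypothetical table "items" keyed
on an "id" column (the table, column, and file names are illustrative, not
from your schema):

```sql
BEGIN;
-- Block concurrent writers while we swap records in.
LOCK TABLE items IN EXCLUSIVE MODE;

-- Temp table with the same structure as the target, initially empty.
CREATE TEMP TABLE items_new AS SELECT * FROM items WHERE false;

-- Bulk-load the daily flat file.
COPY items_new FROM '/path/to/daily_update.dat';

-- Drop the old versions of any records present in the update...
DELETE FROM items WHERE id IN (SELECT id FROM items_new);

-- ...then insert every record from the update file (new and updated alike).
INSERT INTO items SELECT * FROM items_new;

COMMIT;
```

The point of doing it this way is that both COPY and the set-oriented
DELETE/INSERT are far faster than probing row by row to decide between
INSERT and UPDATE.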
- Einar Karttunen