| From: | Alex <alex(at)meerkatsoft(dot)com> | 
|---|---|
| To: | Sean Davis <sdavis2(at)mail(dot)nih(dot)gov> | 
| Cc: | pgsql-general(at)postgresql(dot)org | 
| Subject: | Re: Question on Insert / Update | 
| Date: | 2005-11-10 02:14:24 | 
| Message-ID: | 4372AD00.7010508@meerkatsoft.com | 
| Lists: | pgsql-general | 
Will give that a try, thanks.
I was actually interested in whether the 2nd approach is common practice, or if there are reasons not to do it that way (see the sketches below the quoted thread).
Alex
Sean Davis wrote:
>On 11/9/05 9:45 AM, "Alex" <alex(at)meerkatsoft(dot)com> wrote:
>
>>Hi,
>>have just a general question...
>>
>>I have a table of 10M records, unique key on 5 fields.
>>I need to update/insert 200k records in one go.
>>
>>I could do a select to check for existence and then either insert or update.
>>Or simply insert, check the error code, and update if required.
>>
>>The 2nd seems to be the logical choice, but will it actually be faster,
>>and moreover, is that the right way to do it?
>
>Probably the fastest and most robust way to go about this, if you have the
>records in the form of a tab-delimited file, is to COPY or \copy (in psql)
>them into a separate loader table and then use SQL to manipulate the records
>(check for duplicates, etc.) for final insertion into the target table.
>
>Sean
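
For reference, a minimal sketch of the 2nd approach (insert, catch the error, update) as a PL/pgSQL function. The table `t`, its two-column key, and the function name are made up for illustration; the real table in the thread has a five-field unique key:

```sql
-- Hypothetical target table; the thread's real table has a
-- unique key over 5 fields, shortened to 2 here.
CREATE TABLE t (
    k1  integer,
    k2  integer,
    val text,
    UNIQUE (k1, k2)
);

-- Approach 2: try the INSERT first; a duplicate key raises
-- unique_violation, which routes the row to an UPDATE instead.
CREATE OR REPLACE FUNCTION upsert_t(p_k1 integer, p_k2 integer, p_val text)
RETURNS void AS $$
BEGIN
    INSERT INTO t (k1, k2, val) VALUES (p_k1, p_k2, p_val);
EXCEPTION
    WHEN unique_violation THEN
        UPDATE t SET val = p_val
         WHERE k1 = p_k1 AND k2 = p_k2;
END;
$$ LANGUAGE plpgsql;
```

One reason this is less common than it looks: each EXCEPTION block opens a subtransaction, so calling this 200k times row-by-row carries noticeable overhead, which is part of why the loader-table approach below tends to win for bulk loads.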
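
And a sketch of the loader-table approach Sean describes, against the same hypothetical table; the staging table name and file path are invented:

```sql
-- Staging table with the same shape as the target.
CREATE TEMP TABLE t_load (LIKE t);

-- In psql: bulk-load the tab-delimited file (hypothetical path).
\copy t_load FROM 'new_records.tab'

-- Update the rows whose unique key already exists in the target ...
UPDATE t
   SET val = l.val
  FROM t_load l
 WHERE t.k1 = l.k1 AND t.k2 = l.k2;

-- ... then insert the remainder in one set-oriented statement.
INSERT INTO t (k1, k2, val)
SELECT l.k1, l.k2, l.val
  FROM t_load l
 WHERE NOT EXISTS (SELECT 1 FROM t
                    WHERE t.k1 = l.k1 AND t.k2 = l.k2);
```

If other sessions may write to `t` at the same time, a `LOCK TABLE t IN SHARE ROW EXCLUSIVE MODE` at the start of the transaction keeps a concurrent insert from slipping in between the UPDATE and the INSERT.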