| From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> | 
|---|---|
| To: | Jonathan Daugherty <jdaugherty(at)commandprompt(dot)com> | 
| Cc: | pgsql-general(at)postgresql(dot)org | 
| Subject: | Re: Bulk data insertion | 
| Date: | 2004-11-26 23:46:54 | 
| Message-ID: | 20283.1101512814@sss.pgh.pa.us | 
| Lists: | pgsql-general | 
Jonathan Daugherty <jdaugherty(at)commandprompt(dot)com> writes:
> The problem is that I don't want to spend a lot of time and memory 
> building such a query (in C).  I would like to know if there is a way to 
> take this huge chunk of data and get it into the database in a less 
> memory-intensive way.  I suppose I could use COPY to put the data into a 
> table with triggers that would do the checks on the data, but it seems 
> inelegant and I'd like to know if there's a better way.
Actually I'd say that is the elegant way.  SQL is fundamentally a
set-oriented (table-oriented) language, and forcing it to do things in
an array fashion is just misusing the tool.
regards, tom lane
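
[Editorial note: the set-oriented approach discussed above — bulk-load raw rows into a staging table, then validate and move them in one SQL statement — can be sketched as follows. This is a minimal illustration, not code from the thread: it uses Python's stdlib `sqlite3` in place of PostgreSQL (so `executemany` stands in for `COPY`), and the `staging`/`readings` tables, their columns, and the validity check are all invented for the example.]

```python
import sqlite3

# Hypothetical schema; the original thread never shows one.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE staging  (sensor_id INTEGER, value REAL);
    CREATE TABLE readings (sensor_id INTEGER, value REAL);
""")

# Step 1: bulk-load the raw rows into the staging table.  In PostgreSQL
# this would be a single COPY from the client, which streams the data
# instead of building one huge multi-row INSERT string in C.
raw_rows = [(1, 0.5), (2, -3.0), (3, 7.25)]
conn.executemany("INSERT INTO staging VALUES (?, ?)", raw_rows)

# Step 2: validate and move the data with one set-oriented statement,
# rather than checking row by row in the client (here the invented rule
# is simply "keep non-negative values").
conn.execute("""
    INSERT INTO readings
    SELECT sensor_id, value FROM staging WHERE value >= 0
""")
conn.execute("DELETE FROM staging")
```

The same two-step shape works in PostgreSQL with `COPY staging FROM STDIN` followed by an `INSERT INTO readings SELECT ... FROM staging WHERE ...`, which is the "set-oriented" usage Tom describes.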