From: "Stuart Grimshaw" <stuart(at)stubbynet(dot)org(dot)uk>
To: "Postgres-SQL" <pgsql-sql(at)postgresql(dot)org>
Subject: Adding many rows to a table.
Date: 2000-05-20 21:53:57
Message-ID: 004101bfc2a5$e6a18280$0200000a@home.stubbynet.org.uk
Lists: pgsql-sql
I was recently doing some testing to see which of two methods of searching a
large block of text was faster. One of the ways I came up with was to
identify keywords in an article and then have a table that cross-references
those keywords back to the articles they appear in:
article | keyword
--------+--------
    520 | cheese
    520 | bread
    521 | pickle
    522 | spam
You get the picture.
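(The table itself is nothing fancy; something like the following is what I
had in mind. The names and the index are just a sketch, not taken from my
actual schema:)

CREATE TABLE keywords (
    article integer,    -- id of the article the keyword appears in
    keyword text        -- one keyword per row
);
CREATE INDEX keywords_keyword_idx ON keywords (keyword);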
I wrote a Perl program to extract these keywords, with the idea of inserting
them into a database. The first way I tried was to have lots of "INSERT
INTO"s in one string, then execute that string. It worked OK until the string
contained too many keywords and the query became too large (this was under
6.5.3).
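(For the record, the string I built looked roughly like this, one statement
per keyword; reconstructed from memory:)

INSERT INTO keywords VALUES (520, 'cheese');
INSERT INTO keywords VALUES (520, 'bread');
INSERT INTO keywords VALUES (521, 'pickle');
INSERT INTO keywords VALUES (522, 'spam');
-- ...and so on, for every keyword in every article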
The docs suggest using COPY for inserting lots of data, so I created a
string:
COPY TO keywords FROM STDIN USING DELIMITERS ',';
520,cheese
520,bread
521,pickle
522,spam
\.
and then executed it, and got the following error:
ERROR: parser: parse error at or near "520"
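(Re-reading the docs while writing this up: they give the syntax as COPY
table FROM stdin, with no TO, and I gather the data lines after it aren't
ordinary SQL, so the interface has to know about COPY. My best guess at a
corrected version, fed to psql as a script rather than executed as one query
string, would be something like this; "keywords.sql" and "mydb" are just
illustrative names:)

-- keywords.sql
COPY keywords FROM stdin USING DELIMITERS ',';
520,cheese
520,bread
521,pickle
522,spam
\.

and then:

psql mydb < keywords.sql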
So my question is, after all that: is that the right way to construct my
string (regardless of language, be it Perl, C/C++, etc.)?
By the way, I've ditched this idea for now, and just use a common or garden
index on the field I want to search for the keywords. I just want to be able
to actually do this; you never know when it might come in handy.
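(The fallback is nothing more than something like the lines below, with
made-up table and column names; as I understand it a plain index only really
helps anchored patterns like 'cheese%', but it does the job for now:)

CREATE INDEX articles_body_idx ON articles (body);
SELECT article FROM articles WHERE body LIKE '%cheese%';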
---------------------------------------
Stuart Grimshaw
Special Projects Developer
Schoolsnet LTD | www.schoolsnet.com
stuart(at)stubbynet(dot)org(dot)uk
---------------------------------------