From: Tatsuo Ishii <t-ishii(at)sra(dot)co(dot)jp>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Automatic code conversion between UNICODE and other encodings
Date: 2000-10-12 08:11:41
Message-ID: 20001012171141V.t-ishii@sra.co.jp
Lists: pgsql-hackers
Hi,
I have committed the first implementation of an automatic code
conversion between UNICODE and other encodings. Currently
ISO8859-[1-5] and EUC_JP are supported. Support for other encodings is
coming soon. Testing of the ISO8859 encodings is welcome, since I have
almost no knowledge of European languages and have no idea how to test
with them.
How to use:
1. configure and install PostgreSQL with --enable-multibyte option
2. create database with UNICODE encoding
$ createdb -E UNICODE unicode
3. create a table and fill it with UNICODE (UTF-8) data. You could
even create a table in which each column holds a different language
(a complete example session is shown after these steps).
create table t1(latin1 text, latin2 text);
4. set your terminal to ISO8859-2 (or whatever encoding you want to test)
5. start psql
6. set client encoding to ISO8859-2
\encoding LATIN2
7. extract ISO8859-2 data from the UNICODE encoded table
select latin2 from t1;
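Putting the steps together, a complete session looks roughly like this
(the database and table names are just the ones from the example above;
load whatever valid UTF-8 data you like into the table):

$ createdb -E UNICODE unicode
$ psql unicode
create table t1(latin1 text, latin2 text);
-- fill t1 with UTF-8 data here (INSERT, COPY, ...)
\encoding LATIN2
select latin2 from t1;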
P.S. I have used bsearch() to search code spaces. Is bsearch()
portable enough?
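To give an idea of how the lookup works, here is a rough sketch (the
type and function names below are made up for illustration only, not
the actual names in the committed code):

/*
 * Sketch of a bsearch()-based lookup over a conversion map that is
 * kept sorted by UTF-8 value.  Names are illustrative only.
 */
#include <stdlib.h>

typedef struct
{
    unsigned int utf;   /* UTF-8 value, packed into an unsigned int */
    unsigned int code;  /* corresponding local code (e.g. LATIN2)   */
} utf_local_pair;

static int
compare_utf(const void *p1, const void *p2)
{
    unsigned int v1 = ((const utf_local_pair *) p1)->utf;
    unsigned int v2 = ((const utf_local_pair *) p2)->utf;

    return (v1 > v2) - (v1 < v2);
}

/* Return the local code for utf, or 0 if the character is not mapped. */
static unsigned int
utf_to_local(const utf_local_pair *map, size_t mapsize, unsigned int utf)
{
    utf_local_pair key;
    const utf_local_pair *found;

    key.utf = utf;
    found = bsearch(&key, map, mapsize, sizeof(utf_local_pair), compare_utf);
    return found ? found->code : 0;
}

Since bsearch() requires the maps to be sorted by the search key, each
per-character lookup costs only O(log n) over the code space.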
--
Tatsuo Ishii