From: Tatsuo Ishii <t-ishii(at)sra(dot)co(dot)jp>
To: tgl(at)sss(dot)pgh(dot)pa(dot)us
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: convert function
Date: 2001-08-15 14:31:54
Message-ID: 20010815233154L.t-ishii@sra.co.jp
Lists: pgsql-hackers
> Tatsuo Ishii <t-ishii(at)sra(dot)co(dot)jp> writes:
> > I have added new function called "convert" similar to SQL99's convert.
> > Convert converts encoding according to parameters. For example, if you
> > have a table named "unicode" in a Unicode database,
>
> > SELECT convert(text_field, 'LATIN1') FROM unicode;
>
> > will return text in ISO-8859-1 representation.
>
> I don't understand how this works. If you have a multibyte-enabled
> backend, won't backend libpq try to convert all outgoing text to
> whatever PGCLIENTENCODING says? How can it know that one particular
> column of a result set is not in the regular encoding of this database,
> but something else? Seems like libpq is going to mess up the results
> by applying an inappropriate multibyte conversion.
If the encodings of the frontend and backend are the same, no
conversion is applied by libpq.
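
As a minimal sketch of that setup (assuming a backend created with
UNICODE encoding, and reusing the table and column names from the
example above, which are hypothetical):

```sql
-- Make the client encoding match the backend encoding, so libpq
-- performs no conversion of its own (equivalent to setting
-- PGCLIENTENCODING in the environment).
SET CLIENT_ENCODING TO 'UNICODE';

-- convert() re-encodes only this column; the bytes reach the client
-- in ISO-8859-1 representation, untouched by libpq.
SELECT convert(text_field, 'LATIN1') FROM unicode;
```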
--
Tatsuo Ishii