From: Hannu Krosing <hannu(at)tm(dot)ee>
To: Tatsuo Ishii <t-ishii(at)sra(dot)co(dot)jp>
Cc: lockhart(at)fourpalms(dot)org, peter_e(at)gmx(dot)net, tgl(at)sss(dot)pgh(dot)pa(dot)us, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Proposal: CREATE CONVERSION
Date: 2002-07-09 15:50:07
Message-ID: 1026229807.7042.5.camel@taru.tm.ee
Lists: pgsql-hackers
On Tue, 2002-07-09 at 03:47, Tatsuo Ishii wrote:
> > An aside: I was thinking about this some, from the PoV of using our
> > existing type system to handle this (as you might remember, this is an
> > inclination I've had for quite a while). I think that most things line
> > up fairly well to allow this (and having transaction-enabled features
> > may require it), but do notice that the SQL feature of allowing a
> > different character set for every column *name* does not map
> > particularly well to our underlying structures.
>
> I've been thinking about this for a while too. What about collations? If we
> add new charsets A and B, and each has 10 collations, then we are going to
> have 20 new types? That seems like overkill to me.
Can't we do all collating in Unicode and convert charsets A and B to and
from it?
I would even recommend going a step further and storing all 'national'
character sets in Unicode.
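The idea can be sketched in a few lines of Python (this is only an
illustration of the concept, not PostgreSQL internals): strings arriving in
two different charsets are converted to Unicode, after which a single
comparison rule applies to both.

```python
# Charset A: Latin-1 bytes; charset B: Shift-JIS bytes.
latin1_bytes = "café".encode("latin-1")
sjis_bytes = "カフェ".encode("shift_jis")

# Convert both to the common internal form (Unicode)...
a = latin1_bytes.decode("latin-1")
b = sjis_bytes.decode("shift_jis")

# ...then one collation (here simply code-point order) works for both,
# instead of needing a separate collation per charset.
print(sorted([a, b]))
```

A real collation would use locale-aware sort keys rather than raw
code-point order, but the point stands: with one internal encoding, the
number of collations no longer multiplies by the number of charsets.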
--------------
Hannu
Next message: Bruce Momjian, 2002-07-09 16:28:44, "Re: I am being interviewed by OReilly"
Previous message: Oliver Elphick, 2002-07-09 15:49:47, "Re: (A) native Windows port"