From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: 9sch1(at)txl(dot)com, pgsql-bugs(at)postgresql(dot)org
Subject: Re: ascii() picks up sign bit past CHAR value 127
Date: 2001-01-19 07:08:34
Message-ID: 7574.979888114@sss.pgh.pa.us
Lists: pgsql-bugs
pgsql-bugs(at)postgresql(dot)org writes:
> The lack of an UNSIGNED INT1 attribute type forces those of us who
> need a positive numeric byte type to use CHAR. The ascii() function
> ostensibly returns the numeric ASCII value of the corresponding CHAR
> attribute value - but once you get beyond the 0-127 ASCII character
> value range, the ascii() function starts picking up the active high
> order bit as a sign bit. This is not too surprising, but it is a bit
> bizarre, since I tend to think of character encoding standards as
> having the option of using the 128-255 character values.
If you use gcc, you could probably recompile the backend with
-funsigned-char to make ascii() work the way you want.
On a machine where char is considered signed, I'm not sure that
ascii()'s behavior is wrong ... you could argue it either way, I suppose.
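
For illustration, here is a minimal standalone C sketch of the sign
extension at issue (just a demonstration, not the backend's actual
ascii() code):

    #include <stdio.h>

    int main(void)
    {
        char c = (char) 0xE9;   /* byte value 233, e.g. Latin-1 e-acute */

        /* Where plain char is signed, promotion to int sign-extends,
         * so the high-order bit is read as a sign bit: prints -23. */
        printf("plain char:    %d\n", (int) c);

        /* Going through unsigned char keeps the 0-255 value: prints 233. */
        printf("unsigned char: %d\n", (int) (unsigned char) c);

        return 0;
    }

Built with gcc -funsigned-char, plain char behaves like unsigned char,
so the first printf would also show 233 - which is the effect the
recompile suggested above would have on ascii().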
regards, tom lane