From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Thomas G(dot) Lockhart" <lockhart(at)alumni(dot)caltech(dot)edu>
Cc: pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: [HACKERS] Parser bug (?)
Date: 1998-12-06 07:58:46
Message-ID: 20897.912931126@sss.pgh.pa.us
Lists: pgsql-hackers
"Thomas G. Lockhart" <lockhart(at)alumni(dot)caltech(dot)edu> writes:
> All points are well taken, but I will object to the "700%" number. Still
> misleading imho.
OK, you understated for effect, I overstated for effect, we're even ;-)
> The SQL92 standard sez that char is shorthand for char(1).
Um. Well, what do you think of calling it "char1" or some such?
> I only left char in the mix because it is used
> internally in Postgres system tables and I didn't want to open that can
> of worms (and don't intend to, so don't panic).
Now that you mention it, I've noticed a few places where plain char is
used in the system tables in *exactly* the way I'm talking about, ie,
as a simple form of enumerated type. For example, the typtype and
typalign fields in pg_type.
All I'm asking for is a reliable way to get at that same functionality
in a user table.
> Another detail: we would need to figure out how to do locale and
> multibyte support for this "char" type to allow equivalence with
> multibyte char(1). Not sure how to do that since this type *is* used
> internally and probably can't be resized like that.
Urgh. Multibyte char support is the *last* thing I'm looking for
in this context. How about we rename the type
"justoneplainvanillaasciichar" and have done with it ;-) ?
[ caution, topic drift ahead ]
> Speaking of which, are you or Bruce (or anyone else) thinking of testing
> the unsigned int vs size_t for the Size typedef recently mentioned by
> Oliver? Can we count on all platforms to have size_t defined?
I was looking at that. size_t exists on all platforms, the trouble
is that pre-ANSI platforms are not very consistent about exactly
which system header file(s) define it. If we make c.h (and c.h, not
postgres.h, is what defines Size) depend on size_t then we may find
a few compilation failures due to .c files not pulling in all the
right system headers before including c.h.
On the other hand, c.h unconditionally requires <stdlib.h> which
itself is an ANSI-ism. And stdlib is one of the headers that ANSI
specifies to define size_t --- so any ANSI-compliant header fileset
*will* support this change. It's only not-quite-ANSI systems that
we risk problems with here. So probably I'm being too conservative
to worry at all. If you don't have a reasonably ANSI-conformant
compiler and header fileset you're gonna have a heck of an
unpleasant time building Postgres anyway, I suspect.
c.h says that Size is intended to represent the result type of
sizeof, and that most certainly is size_t, *not* any other type,
according to the ANSI spec. So if Size is being used in the
code to represent the size of in-memory objects then it definitely
ought to be size_t.
In theory this change is absolutely correct, and my guess is we
should do it. But if we see a few glitches on obsolete platforms,
don't say I didn't warn you ;-).
regards, tom lane