From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Thomas Lockhart <lockhart(at)fourpalms(dot)org>
Cc: pgsql-hackers(at)postgreSQL(dot)org
Subject: Unhappiness with forced precision conversion for timestamp
Date: 2001-10-04 18:03:13
Message-ID: 29458.1002218593@sss.pgh.pa.us
Lists: pgsql-hackers

It seems to me that when there is no explicit precision notation
attached, a time/timestamp datatype should not force a precision of
zero, but should accept whatever it's given. This is analogous to
the way we do char, varchar, and numeric: there's no length limit
if you don't specify one. For example, I think this result is quite
unintuitive:
regression=# select '2001-10-04 13:52:42.845985-04'::timestamp;
      timestamptz
------------------------
 2001-10-04 13:52:43-04
(1 row)
Throwing away the clearly stated precision of the literal doesn't
seem like the right behavior to me.
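For contrast, here is roughly how the unconstrained cases behave today
when you leave off the precision (an illustrative session, column
formatting approximate):

regression=# select '123.456789'::numeric;
  numeric
------------
 123.456789
(1 row)

Nothing gets thrown away there, and I'd like TIME/TIMESTAMP to follow suit.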
The code asserts that SQL99 requires the default precision to be zero,
but I do not agree with that reading. What I find is in 6.1:
30) If <time precision> is not specified, then 0 (zero) is implicit.
If <timestamp precision> is not specified, then 6 is implicit.
so at the very least you'd need two different settings for TIME and
TIMESTAMP. But we don't enforce the spec's idea of default precision
for char, varchar, or numeric, so why start doing so with timestamp?
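To make the difference concrete, under the spec's reading the two types
would come out differently when no precision is written; something like
this (purely illustrative of the spec defaults, not of what the code
does today):

select '13:52:42.845985'::time;
-- spec default precision 0 would give 13:52:43
select '2001-10-04 13:52:42.845985'::timestamp;
-- spec default precision 6 would keep 2001-10-04 13:52:42.845985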
Essentially, what I want is for gram.y to set typmod to -1 when it
doesn't see a "(N)" decoration on TIME/TIMESTAMP. I think everything
works correctly after that.
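In other words, the example above would come back untouched, something
like:

regression=# select '2001-10-04 13:52:42.845985-04'::timestamp;
          timestamptz
-------------------------------
 2001-10-04 13:52:42.845985-04
(1 row)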
regards, tom lane