From: Thomas Lockhart <lockhart(at)alumni(dot)caltech(dot)edu>
To: Leon <leon(at)udmnet(dot)ru>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: [HACKERS] Status report: long-query-string changes
Date: 1999-09-13 03:45:28
Message-ID: 37DC7358.AE1E6DAB@alumni.caltech.edu
Lists: pgsql-hackers
> Thomas Lockhart should speak up...
> He knows he'll never have to answer for any of his theories actually
> being put to test. If they were, they would be contaminated by reality.
You talkin' to me?? ;)
So, while you are on the lexer warpath, I'd be really happy if someone
would fix the following behavior:
(I'm doing this from memory, but afaik it is close to correct)
For non-psql applications, such as tcl or ecpg, which do not do any
pre-processing on input tokens, a trailing unterminated string will
be silently lost, and no error will be reported. For example,
select * from t1 'abc
sent directly to the server will not fail as it should with that
garbage at the end. The lexer is in a non-standard mode after all
tokens are processed, and the accumulated string "abc" is left in a
buffer and not sent to yacc/bison. I think you can see this behavior
just by looking at the lexer code.
A simple fix would be to check the size of that accumulated string
buffer after lexing is done, and elog(ERROR) a complaint if it is
non-zero. Perhaps a more general fix would be to ensure that the lexer
is never left in an exclusive state after all tokens are processed,
but I'm not sure how to do that.
- Thomas
--
Thomas Lockhart lockhart(at)alumni(dot)caltech(dot)edu
South Pasadena, California