From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Peter Eisentraut <peter_e(at)gmx(dot)net>
Cc: Ashley Cambrell <ash(at)freaky-namuh(dot)com>, Neil Conway <nconway(at)klamath(dot)dyndns(dot)org>, PostgreSQL Development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Scanner performance (was Re: 7.3 schedule)
Date: 2002-04-13 06:21:52
Message-ID: 13686.1018678912@sss.pgh.pa.us
Lists: pgsql-hackers
Peter Eisentraut <peter_e(at)gmx(dot)net> writes:
> My profiles show that the work spent in the scanner is really minuscule
> compared to everything else.
Under ordinary circumstances I think that's true ...
> (The profile data is from a run of all the regression test files in order
> in one session.)
The regression tests contain no very-long literals. The results I was
referring to concerned cases with string (BLOB) literals in the
hundreds-of-K range; it seems that the per-character loop in the flex
lexer starts to look like a bottleneck when you have tokens that much
larger than the rest of the query.
Solutions seem to be either (a) make that loop quicker, or (b) find a
way to avoid passing BLOBs through the lexer. I was merely suggesting
that (a) should be investigated before we invest the work implied
by (b).
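To make the per-character cost concrete, here is a minimal, hypothetical flex sketch (not PostgreSQL's actual scan.l; the buffer, the helper addlit_run, and the rules are illustrative only). It contrasts accumulating a quoted literal one character per rule invocation with matching a maximal run and appending it in a single call, which is the kind of change option (a) points at.

	%{
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	static char  *litbuf = NULL;    /* accumulated literal text */
	static size_t litlen = 0;
	static size_t litalloc = 0;

	/* Append a run of bytes to the literal buffer in one call. */
	static void
	addlit_run(const char *text, size_t len)
	{
		if (litlen + len + 1 > litalloc)
		{
			litalloc = (litlen + len + 1) * 2;
			litbuf = realloc(litbuf, litalloc);
			if (litbuf == NULL)
				exit(1);
		}
		memcpy(litbuf + litlen, text, len);
		litlen += len;
		litbuf[litlen] = '\0';
	}
	%}

	%option noyywrap
	%x STR

	%%

	"'"             { litlen = 0; BEGIN(STR); }

		/* Slow style: one rule invocation (and one append) per character,
		 * so a multi-hundred-kilobyte literal runs the whole match loop
		 * once per byte:
		 *
		 *   <STR>[^']    { addlit_run(yytext, 1); }
		 */

		/* Faster style: match a maximal run of ordinary characters in one
		 * rule and append it with a single memcpy. */
	<STR>[^']+      { addlit_run(yytext, yyleng); }
	<STR>"''"       { addlit_run("'", 1); }   /* doubled quote inside literal */
	<STR>"'"        { BEGIN(INITIAL);
	                  printf("literal of %zu bytes\n", litlen); }

	.|\n            { /* ignore everything outside literals in this sketch */ }

	%%

	int
	main(void)
	{
		return yylex();
	}

Feeding this a query-sized input with one very large quoted literal shows the difference: the run-matching rules touch the buffer once per run rather than once per byte, though the scanner still has to walk every character of the token either way.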
regards, tom lane