From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: josh(at)agliodbs(dot)com
Cc: PG Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: regex cache
Date: 2008-06-18 05:27:16
Message-ID: 21213.1213766836@sss.pgh.pa.us
Lists: pgsql-hackers

Josh Berkus <josh(at)agliodbs(dot)com> writes:
> I'm doing some analysis of PostgreSQL site traffic, and am being frequently
> hung up by the compile-time-fixed size of our regex cache (32 regexes, per
> MAX_CACHED_RES). Is there a reason why it would be hard to use work_mem
> or some other dynamically changeable limit for regex caching?

Hmmm ... Spencer's regex library makes a point of hiding its internal
representation of a compiled regex from the calling code. So measuring
the size of the regex cache in bytes would involve doing a lot of
violence to that API. We could certainly allow the size of the cache
measured in number-of-regexes to be controlled, though.
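
A minimal sketch of such a count-limited cache might look like the
following. This is purely an illustration, not PostgreSQL's actual
code: it uses the portable POSIX <regex.h> API as a stand-in for the
Spencer engine (whose compiled representation we can't measure in
bytes anyway), and names like re_lookup and cache_max are invented
here. The point is only that the capacity becomes a runtime variable
instead of the compile-time MAX_CACHED_RES.

    #include <regex.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct cached_re
    {
        char       *pattern;    /* pattern text, used as the cache key */
        regex_t     re;         /* compiled representation */
    } cached_re;

    static cached_re *cache;        /* slot 0 is the most recently used */
    static int      cache_len = 0;  /* number of live entries */
    static int      cache_max = 32; /* runtime-settable, unlike MAX_CACHED_RES */

    /*
     * Return the compiled form of "pattern", compiling and caching on a
     * miss and evicting the least recently used entry when full.
     */
    static regex_t *
    re_lookup(const char *pattern)
    {
        cached_re   entry;
        int         i;

        for (i = 0; i < cache_len; i++)
        {
            if (strcmp(cache[i].pattern, pattern) == 0)
            {
                /* hit: shift entries 0..i-1 down, move this one to front */
                entry = cache[i];
                memmove(&cache[1], &cache[0], i * sizeof(cached_re));
                cache[0] = entry;
                return &cache[0].re;
            }
        }

        /* miss: drop the least recently used entry if we're at capacity */
        if (cache_len >= cache_max)
        {
            cache_len--;
            regfree(&cache[cache_len].re);
            free(cache[cache_len].pattern);
        }

        if (regcomp(&entry.re, pattern, REG_EXTENDED) != 0)
            return NULL;        /* let the caller report the error */
        entry.pattern = strdup(pattern);

        /* insert the new entry at the front */
        memmove(&cache[1], &cache[0], cache_len * sizeof(cached_re));
        cache[0] = entry;
        cache_len++;
        return &cache[0].re;
    }

    int
    main(void)
    {
        regex_t    *re;

        cache = malloc(cache_max * sizeof(cached_re));
        re = re_lookup("^foo[0-9]+$");
        printf("match: %d\n", re != NULL &&
               regexec(re, "foo42", 0, NULL, 0) == 0);
        return 0;
    }

Exposing cache_max as a GUC would then let the limit be raised
without recompiling, which is about as far as we can go without
cracking open the library's opaque compiled form.
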
Having said that, I'm not sure it'd help your problem. If your query is
using more than 32 regexes concurrently, it likely is using $BIGNUM
regexes concurrently. How do we fix that?

			regards, tom lane