From: "Bryan White" <bryan(at)arcamax(dot)com>
To: "pgsql-general" <pgsql-general(at)postgreSQL(dot)org>, "Steve Wolfe" <steve(at)iboats(dot)com>
Subject: Re: how good is PostgreSQL
Date: 2000-10-31 21:49:10
Message-ID: 025701c04384$66a1d5c0$2dd260d1@arcamax.com
Lists: pgsql-general pgsql-hackers
> Whenever a query is executed (not found in cache, etc.), the caching
> system would simply store the query, the results, and a list of tables
> queried. When a new query came in, it would do a quick lookup in the
> query hash to see if it already had the results. If so, whammo.
> Whenever an insert/delete/update was sensed, it would look at the
> tables being affected, and the caching mechanism would clear out the
> entries depending on those tables.
It seems to me that tracking the list of cached queries and watching for
queries that might invalidate them adds a lot of complexity to the back end,
and the front end still has to establish the connection and wait for the
data to transfer over the socket.
On a more practical level, a backend solution would require someone with
fairly detailed knowledge of the internals of the backend. A front-end
solution is more likely to be implemented by someone less knowledgeable.
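For illustration, here is a minimal front-end cache sketch. It is hypothetical, not code from this thread: it uses Python's sqlite3 standard library so it stays self-contained, and the caller must supply the list of tables each query reads, which is exactly the bookkeeping being debated above.

    import sqlite3

    class QueryCache:
        """Front-end query cache: results are keyed on the exact SQL
        text, and each entry records which tables it read so that
        writes to those tables can invalidate it."""

        def __init__(self, conn):
            self.conn = conn
            self.cache = {}     # sql text -> cached rows
            self.by_table = {}  # table name -> set of cached sql texts

        def select(self, sql, tables):
            # 'tables' must name every table the query reads; deriving
            # this automatically is the hard part of the proposal.
            if sql in self.cache:
                return self.cache[sql]  # cache hit: no round trip
            rows = self.conn.execute(sql).fetchall()
            self.cache[sql] = rows
            for t in tables:
                self.by_table.setdefault(t, set()).add(sql)
            return rows

        def write(self, sql, table):
            # Any insert/update/delete drops every cached query that
            # depends on the affected table.
            self.conn.execute(sql)
            for stale in self.by_table.pop(table, set()):
                self.cache.pop(stale, None)

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.execute("INSERT INTO users VALUES ('steve')")
    qc = QueryCache(conn)
    print(qc.select("SELECT name FROM users", ["users"]))  # miss, hits the DB
    print(qc.select("SELECT name FROM users", ["users"]))  # served from cache
    qc.write("DELETE FROM users", "users")                 # invalidates entries
    print(qc.select("SELECT name FROM users", ["users"]))  # miss again, now empty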
One of the big advantages of your technique is that it requires no code
change at the application level. This means less database lock-in. Maybe
that is a disadvantage too. ;-)