From: "Aaron Bono" <postgresql(at)aranya(dot)com>
To: pgsql-sql(at)postgresql(dot)org
Subject: Fwd: refining view using temp tables
Date: 2006-11-06 22:51:50
Message-ID: bf05e51c0611061451w521e6d4dxd83a55ddea9dfd52@mail.gmail.com
Lists: pgsql-sql
---------- Forwarded message ----------
From: Aaron Bono <aaron(dot)bono(at)aranya(dot)com>
Date: Nov 6, 2006 4:51 PM
Subject: Re: [SQL] refining view using temp tables
To: BeemerBiker <jstateson(at)swri(dot)edu>
Cc: pgsql-sql(at)postgresql(dot)org
On 10/31/06, BeemerBiker <jstateson(at)swri(dot)edu> wrote:
>
> Using Postgres with .NET VS8. I came up with a scheme of having a web
> user search through the database, selecting into a temp table. A further
> "refined" search would then use that temp table as input and another temp
> table as output, then swap the source and destination so as to not use up
> resources. Maybe this is not a good mechanism. I found right away I could
> not easily check for table existence or drop non-existent tables without
> getting a server error (even with try {} catch {}).
>
> I may have the same user bring up two or more pages during the same
> session, and likewise for other users. I would not want to code up
> temporary names using timestamps and usernames if I could avoid it. It
> would be nice if the SQL engine would handle this for me. Can someone
> suggest how PostgreSQL could handle a Google-like "search within
> results"?
The best approach will probably vary depending on the volume of usage on
your site.

One thing I would try is to create a table for searches and search results
with a primary key equal to the user's session ID or login ID. That only
gives them one search at a time; if you need more, you can use a surrogate
key. Then have everyone use the same tables - no temp tables needed.
Tables:

user_search
    user_search_id (PK),
    session_id (indexed),
    create_dt,
    last_access_dt

user_search_param (one record for each search parameter entered for the filter)
    user_search_param_id (PK),
    user_search_id (FK),
    param_name,
    param_value

user_search_results (one record per row returned by the search)
    user_search_results_id (PK - this may not be necessary),
    user_search_id (FK),
    sort_index (to help preserve sort order),
    values (this can be CSV, XML, or even broken out into another table)
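A minimal DDL sketch of the tables above (column types, lengths, and defaults are my assumptions, not from the original post; "values" is double-quoted because VALUES is a reserved word):

```sql
-- Sketch only: types and defaults are assumptions.
CREATE TABLE user_search (
    user_search_id   serial PRIMARY KEY,
    session_id       varchar(64) NOT NULL,
    create_dt        timestamp NOT NULL DEFAULT now(),
    last_access_dt   timestamp NOT NULL DEFAULT now()
);
CREATE INDEX user_search_session_idx ON user_search (session_id);

CREATE TABLE user_search_param (
    user_search_param_id serial PRIMARY KEY,
    user_search_id   integer NOT NULL
        REFERENCES user_search ON DELETE CASCADE,
    param_name       varchar(100) NOT NULL,
    param_value      varchar(255)
);

CREATE TABLE user_search_results (
    user_search_results_id serial PRIMARY KEY,
    user_search_id   integer NOT NULL
        REFERENCES user_search ON DELETE CASCADE,
    sort_index       integer NOT NULL,
    "values"         text   -- CSV/XML blob; could be normalized into another table
);
```

The ON DELETE CASCADE constraints mean that deleting a user_search row automatically removes its parameters and results.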
Then you can add a process that regularly deletes searches whose last access
date is older than so many minutes (a cascade delete, that is). Or you can add
a trigger that fires off this cleanup whenever a new search is added.
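With cascading foreign keys in place, the periodic cleanup can be a single statement (the 30-minute window is just an example):

```sql
-- Assumes ON DELETE CASCADE foreign keys from user_search_param and
-- user_search_results back to user_search; 30 minutes is an example timeout.
DELETE FROM user_search
 WHERE last_access_dt < now() - interval '30 minutes';
```

This could be run from cron, or wrapped in a function called by an INSERT trigger on user_search as suggested above.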
==================================================================
Aaron Bono
Aranya Software Technologies, Inc.
http://www.aranya.com
http://codeelixir.com
==================================================================