From: Josh Berkus <josh(at)agliodbs(dot)com>
To: vernonw(at)gatewaytech(dot)com
Cc: pgsql-sql(at)postgresql(dot)org
Subject: Re: query speed joining tables
Date: 2003-01-14 19:38:43
Message-ID: 200301141138.43029.josh@agliodbs.com
Lists: pgsql-sql
Vernon,
> What I stated is my observation on my project with over twenty multivalued
> detail tables. I have a selection query contained 200 characters, involving
> 10 tables, and using subquery. The performance is not bad after properly
> indexing, least than 3 second (what the planner says). I will have longer
> queries later and hope they won't have any performance problem.
One trick for you is to create a custom aggregate for string concatenation for
each detail table; that will let you list the values in the detail table as if
they were a continuous text string. A concat aggregate is even reasonably fast
on PostgreSQL.
CREATE FUNCTION "comma_cat" (text,text) RETURNS text AS '
SELECT CASE WHEN $2 IS NULL OR $2 = '''' THEN $1
WHEN $1 IS NULL OR $1 = '''' THEN $2
ELSE $1 || '', '' || $2
END
' LANGUAGE 'sql';
CREATE FUNCTION "br_cat" (text, text) RETURNS text AS '
SELECT CASE WHEN $2 IS NULL OR $2 = '''' THEN $1
WHEN $1 IS NULL OR $1 = '''' THEN $2
ELSE $1 || ''<br>'' || $2
END
' LANGUAGE 'sql';
-- create aggregate with html <br> breaks between items
CREATE AGGREGATE br_list ( BASETYPE = text, SFUNC = br_cat, STYPE = text,
    INITCOND = '' );

-- create aggregate with commas between items
CREATE AGGREGATE comma_list ( BASETYPE = text, SFUNC = comma_cat, STYPE = text,
    INITCOND = '' );
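For example (using a hypothetical "person" master table and "phone" detail
table -- substitute your own table and column names), you could then collapse
each person's detail rows onto a single line:

-- list each person's phone numbers as one comma-separated string
SELECT p.person_id, p.name,
       comma_list(ph.phone_number) AS phone_numbers
FROM person p
LEFT JOIN phone ph ON ph.person_id = p.person_id
GROUP BY p.person_id, p.name;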
-Josh Berkus