From: Vernon Wu <vernonw(at)gatewaytech(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: How good I can get
Date: 2003-01-14 19:42:07
Message-ID: 65TWVMK3ZVRRMF0DYTHF74KGKE08CA.3e24680f@kimiko
Lists: pgsql-performance
In my current project, among over twenty main tables, three are master tables and the rest are multivalued
detail tables. All of those tables have a field named userid, which is of varchar type.
A select statement with one level of subquery can involve up to twenty tables. After some query tuning
and indexing (on all the multivalued fields of all the detail tables), query performance improved spectacularly. The
following is the output of "explain analyze" on a query involving ten tables.
NOTICE: QUERY PLAN:
Result (cost=0.00..23.42 rows=1 width=939) (actual time=28.00..28.00 rows=0 loops=1)
. . .
Total runtime: 31.00 msec
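For concreteness, here is a rough sketch of the kind of layout I mean (the table and column names are
made up for illustration, not the actual schema):

CREATE TABLE master_profile (
    userid varchar(20) PRIMARY KEY,    -- the varchar join key
    status char(1)
);

CREATE TABLE detail_hobby (            -- one of the multivalued detail tables
    userid varchar(20) REFERENCES master_profile (userid),
    hobby  char(1)                     -- one row per value
);

CREATE INDEX detail_hobby_userid_idx ON detail_hobby (userid);
CREATE INDEX detail_hobby_hobby_idx  ON detail_hobby (hobby);

A ten-table query then joins master_profile to several such detail tables on userid, and each join can
use its index.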
I would like to find any remaining performance-improvement potential in the table design itself (indexing can always
be done later). The first thing I know I can do is change the join key, userid, to a numeric type. Since implementing
that change requires some work on the system, I would like to know how significant a performance improvement it can bring.
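One possible migration path, assuming a surrogate integer key is added alongside the varchar one (the
sequence and column names below are hypothetical):

CREATE SEQUENCE user_no_seq;

ALTER TABLE master_profile ADD COLUMN user_no integer;
UPDATE master_profile SET user_no = nextval('user_no_seq');

ALTER TABLE detail_hobby ADD COLUMN user_no integer;
UPDATE detail_hobby SET user_no = m.user_no
  FROM master_profile m
  WHERE detail_hobby.userid = m.userid;

CREATE INDEX detail_hobby_user_no_idx ON detail_hobby (user_no);

Integer comparisons avoid the per-row string comparison a varchar join pays, and the index entries are
smaller, so I would expect some gain, though I don't know how large.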
Almost all fields of those tables hold single-character values. I would guess that changing them to a number type could
also improve select performance; again, my question is how much. The application is a web application, where all data
on a page is a string (text) type, so every numeric value would have to be parsed from a string going into the back end
and converted back into a string coming out. That change would therefore add overhead on the application side.
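As a rough sketch of that single-character change (again with made-up names), assuming the values are
digits such as '1' through '9':

ALTER TABLE detail_hobby ADD COLUMN hobby_no smallint;
UPDATE detail_hobby SET hobby_no = hobby::text::smallint;  -- cast the stored digit via text
CREATE INDEX detail_hobby_hobby_no_idx ON detail_hobby (hobby_no);

A smallint compares as a plain integer and stores in two bytes, but every web-form value would then need
parsing and formatting at the application boundary, which is the overhead I mentioned.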
Thanks for your input.
Vernon