From:       "Josh Berkus" <josh(at)agliodbs(dot)com>
To:         pgsql-sql(at)postgresql(dot)org
Subject:    Performance problems - Indexes and VACUUM
Date:       2001-10-17 03:29:50
Message-ID: web-149383@davinci.ethosmedia.com
Lists:      pgsql-sql
Tom, Folks:
I am having a rather interesting time getting performance out of my
database. I'd really appreciate some feedback from the list on this.
As you may recall, I've gotten around Postgres' lack of rowset-returning
stored procedures by constructing "pointer tables" which simply hold
lists of primary keys related to the user's current search. This is an
excellent approach for a browser-based application, and I have since
used this idea on other databases, even one that supports stored
procedures.
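For anyone who missed the earlier thread, here is a minimal sketch of
the pattern. The table and column names are made up for illustration;
the real schema is more involved:

    -- Each row ties one primary key from the real data table to a
    -- user's current search session.
    CREATE TABLE search_pointers (
        session_id  INTEGER NOT NULL,  -- identifies the user's search
        client_id   INTEGER NOT NULL   -- PK of a matching clients row
    );
    CREATE INDEX idx_search_pointers ON search_pointers (session_id);

    -- The application then pages through results by joining on the
    -- pointer table:
    SELECT c.*
    FROM clients c, search_pointers p
    WHERE p.client_id = c.client_id
      AND p.session_id = 1071;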
However, this means that I clear all of these pointer tables on a
periodic basis (how frequently depends on usage). Just clearing the
records didn't work, because of the Postgres "padded index" problem:
eventually the indexes on these tables become full of deleted rows.
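By "clearing" I mean a plain delete, using the same made-up names as
above:

    -- The rows go away, but the dead entries linger in the b-tree
    -- index, so the index just keeps growing.
    DELETE FROM search_pointers;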
This gives me problem 1:
1. INDEXES: I discovered, the hard way, a peculiar problem. If you drop
and re-create a table within the same transaction (in a function, for
example), the indexes do not get dropped completely. Doing this to
several tables, I had the disturbing experience of seeing incorrect rows
returned by some queries. Explicitly dropping each of the indexes, then
dropping the tables, re-creating the tables, and re-creating the indexes
seems to work. However, this seems to me to indicate a potential problem
with DDL commands within transactions.
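For the record, the sequence that does work for me looks like this,
again with the illustrative names from above:

    DROP INDEX idx_search_pointers;
    DROP TABLE search_pointers;
    CREATE TABLE search_pointers (
        session_id  INTEGER NOT NULL,
        client_id   INTEGER NOT NULL
    );
    CREATE INDEX idx_search_pointers ON search_pointers (session_id);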
The second problem is giving me severe grief right now:
2. I have a very complex view designed for browsing client information.
This view involves two other views and two custom aggregates built on
sub-queries (I could only do this in Postgres!). The query plan is as
long as this e-mail, but thanks to optimization and good indexing the
query runs in about 2 seconds right after a VACUUM.
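The actual aggregates are too long to post here, but as a sketch of the
general shape, here is a trivial string-concatenation aggregate. The
aggregate and table names are made up; textcat() is the built-in text
concatenation function:

    -- State type and transition function are both plain text; each
    -- input value gets appended to the accumulated state.
    CREATE AGGREGATE concat_text (
        basetype = text,
        sfunc    = textcat,
        stype    = text,
        initcond = ''
    );

    -- e.g. collapse a sub-query's rows into one value per client:
    SELECT client_id, concat_text(note)
    FROM client_notes
    GROUP BY client_id;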
Unfortunately, 6 hours after a VACUUM, the query bogs down. The query
plan does not seem to have changed much, but somehow what took 50% of
the processor for 2 seconds at 8:30 AM flattens the processor for a full
45 seconds at 3:30 PM.
Once VACUUM can be run in the background, I suppose that this can be
dealt with, but until then does anyone have any suggestions?
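In the meantime, my stopgap plan is to fire frequent vacuums of the
busiest tables from cron via psql. A sketch, assuming the made-up table
names from above:

    -- Scheduled maintenance: reclaim dead tuples and refresh the
    -- planner's statistics on the high-churn tables.
    VACUUM ANALYZE search_pointers;
    VACUUM ANALYZE clients;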
-Josh Berkus
______AGLIO DATABASE SOLUTIONS___________________________
Josh Berkus
Complete information technology      josh(at)agliodbs(dot)com
and data management solutions        (415) 565-7293
for law firms, small businesses      fax 621-2533
and non-profit organizations.        San Francisco