From: Madison Kelly <linux(at)alteeve(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Need a high-level suggestion on how to solve a performance problem
Date: 2005-07-07 13:13:30
Message-ID: 42CD2A7A.10603@alteeve.com
Lists: pgsql-performance
Hi all,
I hope I am not asking too many questions. :)
I have been trying to solve a performance problem in my program for a
while now. After getting an index working, which didn't speed things up
enough, I am stumped. I am hoping someone here has come across a similar
issue and come up with a creative solution they wouldn't mind sharing.
I am not looking for details; I expect to do my homework. I just need
a pointer, suggestion or trick.
Some background: I am using pgSQL as the back end for my web-based
*nix backup program. Part of the database stores info on every file and
directory per partition, and I use this information to build the
directory tree. My test partition has ~325,000 files, of which ~30,000
are directories. I have been able to get performance up to a reasonable
level for displaying the directory tree, including expanding and
contracting branches (~3-5 sec). I do this by loading all the directory
info into an array and a hash once and using them as needed instead of
hitting the DB.
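Roughly speaking, the one-time load boils down to a single query along
these lines (the file_name and file_type columns are only illustrative;
they don't appear elsewhere in this message):

    -- Illustrative only: pull every directory row once, then build the tree
    -- from the in-memory array/hash instead of re-querying the DB per branch.
    SELECT file_parent_dir, file_name, file_backup
      FROM file_info_1
     WHERE file_type = 'd'
     ORDER BY file_parent_dir, file_name;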
The problem comes when the user toggles a directory branch's backup
flag (a simple check box beside the directory name). If it's a directory
near the end of a branch, it is fast enough, and toggling a single file
is nearly instant. However, if they toggle, say, the root directory, so
that every file and directory below it needs to be updated, it can take
500-600 sec to return. Obviously this is no good.
What I need is a scheme for being able to run something like this, only faster:
UPDATE file_info_1 SET file_backup='t' WHERE file_parent_dir~'^/';
An index isn't enough, because a query like this has to hit every entry anyway.
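For context, only file_info_1, file_backup and file_parent_dir appear in
the query above; the rest of this sketch of the table is just assumed
column names and types to make the example concrete:

    -- Minimal sketch only: file_name and file_type are assumed columns;
    -- the real per-partition table almost certainly has more than this.
    CREATE TABLE file_info_1 (
        file_parent_dir  text     NOT NULL,  -- e.g. '/home/user/docs'
        file_name        text     NOT NULL,  -- assumed
        file_type        char(1)  NOT NULL,  -- assumed: 'f' = file, 'd' = directory
        file_backup      boolean  NOT NULL DEFAULT true
    );

Toggling the backup flag on the root directory means rewriting every one
of the ~325,000 rows in a table like this, which is where the 500-600 sec
goes.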
I use Perl to access the DB and generate the web pages. The file
browser portion looks and acts like most file browsers (a directory tree
in the left frame with expanding and contracting branches, and a list of
files in the selected directory on the right). It does not use any
plug-ins like Java, and it is important to me that it stays that way (I
want it to be as simple as possible for the user to install).
So far the only suggestion I've received is to keep a secondary
'delta' table that stores just the requested changes. Then, on load, get
the existing data and check it against the delta table before creating
the page (see the sketch below). The biggest drawback for me with this
is that currently I don't need to provide an 'Apply' button, because a
simple javascript call passes the request on to the perl script
immediately. I really like the Mac-esque approach of keeping the UI as
simple and straightforward as possible. So, a suggestion that doesn't
require something like an 'Apply' button would be much appreciated.
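To spell out my understanding of that suggestion, the delta table would
be something like this (all names here are my own guesses, not the
suggester's exact schema):

    -- Sketch of the suggested 'delta' approach: each toggle is stored as
    -- one small row instead of immediately rewriting every affected file row.
    CREATE TABLE file_backup_delta (
        delta_id         serial       PRIMARY KEY,
        file_parent_dir  text         NOT NULL,   -- subtree the toggle applies to
        file_backup      boolean      NOT NULL,   -- new value for that subtree
        delta_time       timestamptz  NOT NULL DEFAULT now()
    );

    -- Toggling the root directory becomes one cheap insert...
    INSERT INTO file_backup_delta (file_parent_dir, file_backup)
    VALUES ('/', false);

    -- ...and on page load the stored flags get overridden by any matching
    -- delta rows before the tree is rendered.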
Thanks for any suggestions in advance!
Madison
PS - For what it's worth, this is the last hurdle for me to overcome
before I can finally release my program as 'beta' after over 15 months
of work! :)