From: | "Igor V(dot) Rafienko" <igorr(at)ifi(dot)uio(dot)no> |
---|---|
To: | pgsql-general(at)postgresql(dot)org |
Subject: | Postgres-7.0.2 optimization question |
Date: | 2000-10-13 12:05:18 |
Message-ID: | Pine.SOL.4.21.0010131345100.23627-100000@vigrid.ifi.uio.no |
Hi,
I've got a slight optimization problem with postgres and I was hoping
someone could give me a clue as to what could be tweaked.
I have a couple of tables that contain relatively little data (around 500,000 tuples
each), yet most operations take an insanely long time to complete. The
primary keys in both tables are ints (int8, iirc). When I perform a delete
(with a WHERE clause on part of the primary key), an strace shows that
postgres reads the entire table sequentially (lseek() and read()). Since
each table is around 200MB, this takes time.
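For concreteness, here is a minimal sketch of the kind of statement
involved (the table and column names, jobs/batch_id/seq_no, are
hypothetical stand-ins, not my actual schema):

    -- hypothetical table: a composite int8 primary key
    CREATE TABLE jobs (
        batch_id  int8,
        seq_no    int8,
        payload   text,
        PRIMARY KEY (batch_id, seq_no)
    );

    -- the delete filters on only the leading part of the key;
    -- EXPLAIN shows the plan the executor will use
    EXPLAIN DELETE FROM jobs WHERE batch_id = 42;
    -- => Seq Scan on jobs ...  (the sequential lseek()/read() pattern above)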
I tried vacuumdb --analyze; it did not help. I also tried creating an index on
the part of the primary key used in the delete above; that did not help
either.
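For reference, the steps I tried look roughly like this (database, table,
and index names are again made up). One thing worth ruling out on 7.0.x: a
bare integer literal is typed int4, so the planner can refuse to use an
index on an int8 column unless the literal is quoted or cast:

    $ vacuumdb --analyze mydb

    -- index on the leading part of the primary key
    CREATE INDEX jobs_batch_idx ON jobs (batch_id);

    -- bare literal is int4: may still sequential-scan on 7.0.x
    DELETE FROM jobs WHERE batch_id = 42;

    -- quoted or cast literal lets the int8 index be considered
    DELETE FROM jobs WHERE batch_id = '42';
    DELETE FROM jobs WHERE batch_id = 42::int8;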
Has anyone encountered this kind of problem before? If so, has anyone found
a solution? The worry is that the DB can quickly grow 20 times larger
(i.e. 10,000,000 tuples per table would be a moderate size), and I'd rather
not witness a delete that takes around 90 minutes (for roughly 100,000
deleted tuples) more than once.
TIA,
ivr
--
Women wearing Wonder bras and low-cut blouses lose their right to
complain about having their boobs stared at.
"Things men wish women knew"