From: | "Peter Watling" <watling(at)pobox(dot)com> |
---|---|
To: | pgsql-general(at)postgresql(dot)org |
Subject: | Problem with table slowing down - Help with EXPLAIN reqd |
Date: | 2006-07-27 04:00:25 |
Message-ID: | 443a746f0607262100j5c5a1883r7b3a4d12e5cea838@mail.gmail.com |
Lists: pgsql-general
I have a table with only 30-odd records. I use one field on each
record as a sort of status flag, as a means of handshaking between a
number of clients. It works OK in theory; however, over time (just
days) it gets progressively slower, as if PostgreSQL is keeping a list
of all the updates. I tried restarting postgres in case it was some
transaction thing, but it doesn't seem to help.
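For context, the handshake is basically each client repeatedly updating
that one status field, along these lines (a sketch only: the column and
value names here are illustrative, not my real schema):

  -- one client claims a record; each such UPDATE leaves a dead row version behind
  UPDATE dataprocessor_path SET status = 'busy' WHERE id = 7;
  -- later the client hands the record back
  UPDATE dataprocessor_path SET status = 'idle' WHERE id = 7;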
Here are the EXPLAIN results. I just made the pwdelete_temppaths table
by doing a CREATE TABLE ... AS SELECT * FROM the original table, and
that fresh copy runs flat out.
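Roughly what I ran to make the copy (a sketch; the table names are the
ones that appear in the EXPLAIN output below):

  CREATE TABLE pwdelete_temppaths AS SELECT * FROM dataprocessor_path;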
I have also tried doing a VACUUM FULL ANALYSE and a REINDEX with no
change in performance. Dumping to a text file and reloading works, but
that is a bit too savage for something I would have to do frequently.
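This is more or less the maintenance I tried on the slow table
(assuming dataprocessor_path is indeed the affected one):

  VACUUM FULL ANALYSE dataprocessor_path;
  REINDEX TABLE dataprocessor_path;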
From what I can see, it looks like PostgreSQL THINKS there are about
284,000 records to scan through. How can I tell it to flush out the
history of changes?
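In case it is useful, this is the kind of check I can run to see what
the planner has stored for the table (a sketch; the page and tuple
estimates live in pg_class):

  SELECT relname, relpages, reltuples
    FROM pg_catalog.pg_class
   WHERE relname = 'dataprocessor_path';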
Any help gratefully received.
Peter Watling
New Zealand
transMET-MGU=# explain select * from pwdelete_temppaths;
QUERY PLAN
-----------------------------------------------------------------------
Seq Scan on pwdelete_temppaths (cost=0.00..11.40 rows=140 width=515)
(1 row)
transMET-MGU=# explain select * from dataprocessor_path;
QUERY PLAN
---------------------------------------------------------------------------
Seq Scan on dataprocessor_path (cost=0.00..6900.17 rows=284617 width=92)
(1 row)