From: | "Andreas Pronakis" <apronakis(at)bizonline(dot)net> |
---|---|
To: | <pgsql-admin(at)postgresql(dot)org> |
Subject: | PostgreSQL 7.0.3 vacuum and insert problems |
Date: | 2001-02-01 12:54:31 |
Message-ID: | 000c01c08c4e$20b2a450$7200000a@andreas |
Lists: pgsql-admin
Hi,
We are an ISP running our RADIUS accounting on PostgreSQL 7.0.2 on a
twin-processor Intel 550 machine with Solaris 7 and 600MB RAM:
1. We have a database with 8 tables. One of these tables (TABLE1) gets
updated every second or so; as a result it keeps growing and growing and
growing, to about 5 times its original size within 24 hours (9MB -> 45MB).
A second table (TABLE2) gets about 80000 new records (inserts) every day;
at the end of the day those records get moved to a different table and
deleted from the original, which leaves 80000 deleted rows behind. As
PostgreSQL does not reuse out-of-date rows (updated or deleted records),
we are forced to run vacuumdb every day to clean up the unused rows.
However, as the system is permanently in use there isn't really a 'good'
time to do so, so we are forced to stop the service and run vacuumdb.

What we have noticed is that if we export all the data in TABLE2, drop it,
recreate it and then run vacuumdb on the database, it takes about 20 minutes
to finish, whereas if we run it on the whole database without dropping
TABLE2 it takes about 1 hour! The duration of vacuum also increases by the
day if we do not drop the table, i.e. if vacuumdb takes 1 hour on day 1, it
takes 1.5 hours on day 2, 3 hours on day 3, and so on.

Does anyone have a solution or explanation to this problem?
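Roughly, the nightly rotation and the per-table vacuum look like this
(a simplified sketch; "table2_archive" stands in for the real archive
table, it is not our actual schema):

    # move the day's records out of TABLE2 in one transaction
    psql -q DBNAME <<'EOF'
    BEGIN;
    INSERT INTO table2_archive SELECT * FROM table2;
    DELETE FROM table2;
    COMMIT;
    EOF

    # then reclaim the space left by the deleted rows in that table only
    psql -q -c "VACUUM ANALYZE table2" DBNAME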
We have been running some tests on version 7.0.3 to see how it performs with
vacuum. We have identified the following problems with it:
1. We have a perl script that converts data and creates a text file with
thousands of INSERT statements, then executes "psql -n -e -q -f
/DATA/DATA.txt DBNAME". Occasionally the script exits with the
error message:
"psql /DATA/DATA.txt: Notice from PostgreSQL backend
The Postmaster has informed me that some other backend died abnormally and
possibly corrupted shared memory.
I have rolled back the current transaction and am going to terminate your
database system connection and exit
Please reconnect to the database system and repeat your query.
pgsql: /DATA/DATA.txt pqReadData()..backend closed the channel unexpectedly"
Other times it runs all the way through fine. Any ideas what might be
causing this error?
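For completeness, the load step is roughly the following (the table and
column names are made up for illustration; only the psql invocation is the
real one):

    -- /DATA/DATA.txt as generated by the perl script
    INSERT INTO table2 (username, start_time, octets_in) VALUES ('user1', '2001-02-01 12:00:00', 1024);
    INSERT INTO table2 (username, start_time, octets_in) VALUES ('user2', '2001-02-01 12:00:05', 2048);
    -- ...thousands more INSERTs like these...

which is then executed with:

    psql -n -e -q -f /DATA/DATA.txt DBNAME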
2. We have also tested the database by doing simultaneous updates while
running vacuumdb every few thousand updates. Although initially the updates
were held up for only about one second, later on they would stall for
incrementally longer periods. We have tried the same data and script on
PostgreSQL 7.0.3 running on RedHat Linux 6.2 with an Intel PIII, and on
Solaris 8 on Intel hardware; Solaris does not show this problem as
frequently.
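The test we run is roughly the following (a simplified sketch; the table
and column names, the counts and the WHERE clause are made up for
illustration, not the real script):

    # keep updating TABLE1, vacuuming every few thousand updates
    i=0
    while [ $i -lt 100000 ]; do
        psql -q -c "UPDATE table1 SET last_seen = now() WHERE user_id = $(( i % 500 ))" DBNAME
        i=$(( i + 1 ))
        if [ $(( i % 5000 )) -eq 0 ]; then
            vacuumdb DBNAME
        fi
    done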