Hello,
I have a DB with about 30 tables, two of which are significantly
larger than the rest and contain a bit over 100,000 rows.
Every night I do these 3 things:
VACUUM;
ANALYZE;
pg_dump
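Roughly, the nightly job looks like the sketch below (the database name and dump path are placeholders, not my actual setup; the RUN=echo default makes it a dry run that only prints the commands):

```shell
#!/bin/sh
# Sketch of the nightly maintenance routine described above.
# DBNAME and DUMPFILE are hypothetical placeholders.
# RUN defaults to "echo" so the commands are printed, not executed;
# set RUN= (empty) to actually run them.
DBNAME=mydb
DUMPFILE=/var/backups/mydb.dump
RUN=${RUN:-echo}

$RUN psql -d "$DBNAME" -c "VACUUM;"
$RUN psql -d "$DBNAME" -c "ANALYZE;"
$RUN pg_dump "$DBNAME" -f "$DUMPFILE"
```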
I am noticing that the VACUUM part takes nearly 30 minutes, during
which the DB is largely inaccessible (and a great deal of load is put
on the machine in general).
Using the pgsession.sh script mentioned earlier, I caught this process
taking a long time:
31179 | mydb | otis | FETCH 100 FROM _pg_dump_cursor
Is there anything one can do to minimize the impact of VACUUM on the
rest of the system?
I am using PG 7.3.4 on a Linux box with a 1.70GHz Celeron, 1GB RAM, and
a 'regular' IDE disk.
Thanks,
Otis