From: Thomas Lockhart <lockhart(at)alumni(dot)caltech(dot)edu>
To: Tim Perdue <tperdue(at)valinux(dot)com>
Cc: pgsql-hackers(at)hub(dot)org
Subject: Re: Eternal vacuuming....
Date: 2000-05-11 16:17:42
Message-ID: 391ADD26.B7522589@alumni.caltech.edu
Lists: pgsql-hackers
> In 6.4.x and 6.5.x if you delete a large number of rows (say 100,000 -
> 1,000,000) then hit vacuum, the vacuum will run literally forever.
> ...before I finally killed the vacuum process, manually removed the
> pg_vlock, dropped the indexes, then vacuumed again, and re-indexed.
> Will this be fixed?
Patches? ;)
Just thinking here: could we add an option to vacuum so that it would
drop and recreate indices "automatically"? We already have the ability
to chain multiple internal commands together, so that would just
require snarfing the names and properties of indices in the parser
backend and then doing the drops and creates on the fly.
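For illustration, here is roughly what such an option would automate, done by hand today. This is a hedged sketch, not the proposed implementation: the table name "bigtable", index name "bigtable_key_idx", and column "key" are hypothetical stand-ins.

```sql
-- Manual equivalent of a hypothetical "vacuum with index rebuild".
-- Names below are made up for the example.

DROP INDEX bigtable_key_idx;    -- drop the index so vacuum need not maintain it
VACUUM bigtable;                -- reclaim the space left by the deleted rows
CREATE INDEX bigtable_key_idx
    ON bigtable (key);          -- rebuild the index from scratch
```

An automatic version would presumably snarf the index definitions out of the system catalogs (pg_index, pg_class) before dropping, then replay the creates afterward.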
A real problem with this is that those commands are currently not
rollback-able, so if something quits in the middle (or someone kills
the vacuum process; I've heard of this happening ;) then you are left
without indices in sort of a hidden way.
Not sure what the prospects are of making these DDL statements
transactionally secure, though I know we've had some discussions of
this on -hackers.
- Thomas
--
Thomas Lockhart lockhart(at)alumni(dot)caltech(dot)edu
South Pasadena, California