| From: | Tim Perdue <tperdue(at)valinux(dot)com> |
|---|---|
| To: | Thomas Lockhart <lockhart(at)alumni(dot)caltech(dot)edu> |
| Cc: | pgsql-hackers(at)hub(dot)org |
| Subject: | Re: Eternal vacuuming.... |
| Date: | 2000-05-11 15:33:31 |
| Message-ID: | 391AD2CB.C0645504@valinux.com |
| Lists: | pgsql-hackers |
Thomas Lockhart wrote:
>
> > In 6.4.x and 6.5.x if you delete a large number of rows (say 100,000 -
> > 1,000,000) then hit vacuum, the vacuum will run literally forever.
> > ...before I finally killed the vacuum process, manually removed the
> > pg_vlock, dropped the indexes, then vacuumed again, and re-indexed.
> > Will this be fixed?
>
> Patches? ;)
Hehehe, I say the same thing when someone complains about SourceForge.
Now, you know I'm a huge Postgres hugger, but PHP is my strength, and you
would not like any C patches I'd submit anyway.
> Just thinking here: could we add an option to vacuum so that it would
> drop and recreate indices "automatically"? We already have the ability
> to chain multiple internal commands together, so that would just
> require snarfing the names and properties of indices in the parser
> backend and then doing the drops and creates on the fly.
This seems like a hack to me, personally. Can someone figure out why the
vacuum runs forever and fix it? It's probably a logic flaw somewhere.
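(For reference, the manual workaround described in the quoted report, dropping the
indexes, vacuuming, and then re-indexing, amounts to something like the sketch below.
The table name, index name, and column are hypothetical placeholders, not taken from
the original thread.)

```sql
-- Hypothetical example: "big_table", "big_table_name_idx", and the column
-- "name" are placeholders for whatever table actually had the mass DELETE.

-- Drop the index first so VACUUM does not have to clean it up entry by entry
-- for hundreds of thousands of dead rows.
DROP INDEX big_table_name_idx;

-- Reclaim the space left behind by the deleted rows.
VACUUM big_table;

-- Rebuild the index from scratch, which is typically much faster than having
-- VACUUM maintain it during the cleanup.
CREATE INDEX big_table_name_idx ON big_table (name);
```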
Tim
--
Founder - PHPBuilder.com / Geocrawler.com
Lead Developer - SourceForge
VA Linux Systems
408-542-5723