From: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>
To: Thomas Lockhart <lockhart(at)alumni(dot)caltech(dot)edu>
Cc: Tim Perdue <tperdue(at)valinux(dot)com>, pgsql-hackers(at)hub(dot)org
Subject: Re: Eternal vacuuming....
Date: 2000-05-11 17:28:05
Message-ID: 200005111728.NAA18363@candle.pha.pa.us
Lists: pgsql-hackers
> > In 6.4.x and 6.5.x if you delete a large number of rows (say 100,000 -
> > 1,000,000) then hit vacuum, the vacuum will run literally forever.
> > ...before I finally killed the vacuum process, manually removed the
> > pg_vlock, dropped the indexes, then vacuumed again, and re-indexed.
> > Will this be fixed?
>
> Patches? ;)
>
> Just thinking here: could we add an option to vacuum so that it would
> drop and recreate indices "automatically"? We already have the ability
> to chain multiple internal commands together, so that would just
> require snarfing the names and properties of indices in the parser
> backend and then doing the drops and creates on the fly.
We could vacuum the heap table, and then conditionally update or recreate the
index depending on how many tuples we had to move during the vacuum of the
heap.
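
For reference, the manual workaround described earlier in the thread (drop the indexes, vacuum the bare heap, then rebuild) can be sketched as follows; the table and index names here are hypothetical, not taken from the thread:

```sql
-- Hypothetical table "orders" with index "orders_idx" on column "order_id".
-- Drop the index so vacuum does not have to maintain index entries
-- for every tuple it moves while compacting the heap.
DROP INDEX orders_idx;

-- Vacuum now reclaims the space left by the deleted rows, touching
-- only the heap.
VACUUM orders;

-- Rebuild the index from scratch, which is far cheaper than updating
-- it incrementally for each moved tuple.
CREATE INDEX orders_idx ON orders (order_id);
```

Automating this inside vacuum would amount to running the same sequence internally whenever the number of moved tuples crosses some threshold.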
--
Bruce Momjian | http://www.op.net/~candle
pgman(at)candle(dot)pha(dot)pa(dot)us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026