From: The Hermit Hacker <scrappy(at)hub(dot)org>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Hiroshi Inoue <Inoue(at)tpf(dot)co(dot)jp>, Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>, PostgreSQL Development <pgsql-hackers(at)postgreSQL(dot)org>
Subject: Re: ALTER TABLE DROP COLUMN
Date: 2000-10-08 00:11:55
Message-ID: Pine.BSF.4.21.0010072107500.1627-100000@thelab.hub.org
Lists: pgsql-hackers
On Thu, 5 Oct 2000, Tom Lane wrote:
> "Hiroshi Inoue" <Inoue(at)tpf(dot)co(dot)jp> writes:
> > Seems some people expect the implementation in 7.1.
> > (recent [GENERAL] drop column?)
> > I could commit my local branch if people don't mind
> > backward incompatibility.
There have been several ideas thrown back and forth ... the best one that
I saw (forgetting who suggested it) had to do with locking the table and
doing an effective vacuum on it, with a 'row re-write' happening ...
Basically, move the first 100 rows to the end of the table file, then take
row 100 and write it to position 0, row 101 to position 1, etc ... that
way you are using at most (tuple size * 100) bytes of extra disk space, vs
2x the table size ... either method is going to lock the file for a period
of time, but one is much friendlier as far as disk space is concerned.
*Plus*, if RAM is available for this, couldn't the backend use up to -S
blocks of RAM to do it off disk? If I set -S to 64MB and the table is 24MB
in size, it could do it all in memory?
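
To make the idea concrete, here is a rough sketch in C. It assumes a flat
file of fixed-size tuples rather than the real heap/page format, and
compact_file() / rewrite_tuple() are hypothetical names (rewrite_tuple()
stands in for "rebuild the row without the dropped column"). The parked
window is held in memory here, i.e. the RAM variant mentioned above:

#include <stdio.h>
#include <stdlib.h>

#define TUPLE_SIZE 128          /* hypothetical fixed on-disk tuple size */
#define WINDOW     100          /* rows parked aside at any one moment */

/* Hypothetical per-row transformation, e.g. dropping a column. */
static void rewrite_tuple(char *tup)
{
    (void) tup;                 /* ... rebuild tuple without the column ... */
}

static int compact_file(const char *path, long ntuples)
{
    FILE *f;
    char (*window)[TUPLE_SIZE];
    char tup[TUPLE_SIZE];
    long i, parked;

    if (ntuples <= 0)
        return 0;
    if ((f = fopen(path, "r+b")) == NULL)
        return -1;

    parked = (ntuples < WINDOW) ? ntuples : WINDOW;
    if ((window = malloc((size_t) parked * TUPLE_SIZE)) == NULL)
    {
        fclose(f);
        return -1;
    }

    /* Step 1: park the first WINDOW rows so their slots become free;
     * this is the only extra space the whole operation ever needs. */
    if (fread(window, TUPLE_SIZE, (size_t) parked, f) != (size_t) parked)
        goto fail;

    /* Step 2: slide each later row back by WINDOW slots, re-writing it
     * on the way; at any moment only WINDOW slots are duplicated. */
    for (i = parked; i < ntuples; i++)
    {
        if (fseek(f, i * TUPLE_SIZE, SEEK_SET) != 0 ||
            fread(tup, TUPLE_SIZE, 1, f) != 1)
            goto fail;
        rewrite_tuple(tup);
        if (fseek(f, (i - parked) * TUPLE_SIZE, SEEK_SET) != 0 ||
            fwrite(tup, TUPLE_SIZE, 1, f) != 1)
            goto fail;
    }

    /* Step 3: drop the parked rows (re-written) into the slots freed
     * at the end of the file. */
    for (i = 0; i < parked; i++)
    {
        rewrite_tuple(window[i]);
        if (fseek(f, (ntuples - parked + i) * TUPLE_SIZE, SEEK_SET) != 0 ||
            fwrite(window[i], TUPLE_SIZE, 1, f) != 1)
            goto fail;
    }

    free(window);
    return fclose(f) == 0 ? 0 : -1;

fail:
    free(window);
    fclose(f);
    return -1;
}

Row order comes out rotated by the window size, which shouldn't matter
for a heap; the point is that scratch space never exceeds
WINDOW * TUPLE_SIZE no matter how big the table is.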