From: Bruno Wolff III <bruno(at)wolff(dot)to>
To: Dann Corbit <DCorbit(at)connx(dot)com>
Cc: Philipp Buehler <pb-pgsql-g(at)mlsub(dot)buehler(dot)net>, pgsql-general(at)postgresql(dot)org
Subject: Re: 7.3.4 on Linux: UPDATE .. foo=foo+1 degrades massivly over time
Date: 2004-04-22 01:45:43
Message-ID: 20040422014543.GA2160@wolff.to
Lists: pgsql-general
On Wed, Apr 21, 2004 at 14:55:51 -0700,
Dann Corbit <DCorbit(at)connx(dot)com> wrote:
>
> Shouldn't the Database server be the entity that decides when vacuum is
> needed?
At least in simple cases it should. That is what the auto vacuum project
is trying to do.
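In practice that means the server issues the same commands an
administrator would otherwise schedule by hand. A minimal sketch
(tname stands in for the example table mentioned later in this
message):

    -- Reclaim dead row versions and refresh planner statistics.
    VACUUM ANALYZE;          -- the whole database
    VACUUM ANALYZE tname;    -- or just one heavily updated table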
> Also, I should be able to do an update on every row in a database table
> without causing severe problems. Every other database system I know of
> does not have this problem.
You can do this in Postgres without causing too much trouble.
The problem at the beginning of this thread came from updating a
one-row table thousands of times: under MVCC each update leaves a
dead row version behind, and without vacuuming those versions
pile up.
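A minimal sketch of that failure mode (counter is a hypothetical
table name):

    -- Each UPDATE writes a new row version and leaves the old one
    -- behind as a dead tuple, so N updates leave N dead tuples.
    CREATE TABLE counter (n integer);
    INSERT INTO counter VALUES (0);
    UPDATE counter SET n = n + 1;   -- repeated thousands of times
    -- Without this, every later scan steps over the dead versions:
    VACUUM counter;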
> If I have a million row table with a column called is_current, and I do
> this:
> UPDATE tname SET is_current = 0;
> Horrible things happen.
Like what? At worst you will double the disk space used by this table.
That isn't great, but it surely isn't horrible under normal circumstances.
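To make that concrete with the statement quoted above: a plain
VACUUM afterwards marks the old row versions reusable, so the table
stops growing on subsequent updates:

    UPDATE tname SET is_current = 0;  -- writes a second copy of every row
    VACUUM tname;       -- old versions become free space for future writes
    -- VACUUM FULL tname; would actually shrink the file on disk, at the
    -- cost of holding an exclusive lock while it runs.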
> Just an idea:
> Why not recognize that more rows will be modified than the row setting
> can support and actually break the command into batches internally?
This doesn't make sense for Postgres. There is no limit on the number
of rows a single command can modify, so there is nothing to break into
batches.