From: Bruno Wolff III <bruno(at)wolff(dot)to>
To: Oren Mazor <oren(dot)mazor(at)gmail(dot)com>
Cc: pgsql-novice(at)postgresql(dot)org
Subject: Re: dead tuples
Date: 2005-07-22 17:40:12
Message-ID: 20050722174012.GC29782@wolff.to
Lists: pgsql-novice
On Fri, Jul 22, 2005 at 13:31:50 -0400,
Oren Mazor <oren(dot)mazor(at)gmail(dot)com> wrote:
> What happens is that my database files grow significantly. Say I have a
> table filled with people's names, and I modify each one; then my database
> seems to double. This is because (AFAIK) PostgreSQL marks the old rows as
> 'dead' but doesn't delete them. You run VACUUM to reclaim the space.
>
> Which is what I do. But I'm wondering if there's any way to circumvent the
> entire process of marking them as 'dead' and just delete things outright
> when they get updated.
No, because concurrent transactions can still see the old versions of the
tuples. The deletes need to be delayed until all transactions that started
before the updates were committed have either committed or rolled back.
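To make that concrete, here is a sketch of two concurrent psql sessions
(the table and data are made up for illustration; SERIALIZABLE here means
snapshot isolation, as it does in current releases):

    -- Session 1: start a transaction whose snapshot predates the update.
    BEGIN ISOLATION LEVEL SERIALIZABLE;
    SELECT name FROM people WHERE id = 1;   -- sees 'Alice'

    -- Session 2: update the row and commit. PostgreSQL writes a new tuple
    -- and marks the old one dead, but it cannot physically remove the old
    -- one yet because session 1's snapshot still needs it.
    UPDATE people SET name = 'Alicia' WHERE id = 1;
    COMMIT;

    -- Session 1: its snapshot still shows the old version.
    SELECT name FROM people WHERE id = 1;   -- still 'Alice'
    COMMIT;

    -- Only after session 1 finishes can a later VACUUM reclaim the dead
    -- version of the row.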
If you don't need MVCC for your application, you might consider another
database system, such as SQLite, that doesn't keep old row versions. The
downside is that you will need stronger locks when updating tuples, which
may or may not be a problem for you.
There may be some tricks you can do to trade off disk space for performance,
but generally you are better off just buying more disk space.
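For what it's worth, you can watch the cycle the original question
describes; this is just an illustrative sketch (the table name is made up):

    -- Every UPDATE leaves behind a dead version of each modified row, so
    -- updating the whole table roughly doubles its on-disk size.
    UPDATE people SET name = upper(name);

    -- VACUUM VERBOSE reports how many removable row versions it found.
    -- Plain VACUUM makes that space reusable by the table rather than
    -- returning it to the operating system; VACUUM FULL compacts the
    -- file itself, at the cost of an exclusive lock on the table.
    VACUUM VERBOSE people;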