From: Andreas Kretschmer <akretschmer(at)spamfence(dot)net>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: huge table occupation after updates
Date: 2016-12-10 12:36:36
Message-ID: 20161210123636.GA10052@tux
Lists: pgsql-general
Tom DalPozzo <t(dot)dalpozzo(at)gmail(dot)com> wrote:
> Hi,
> I've a table ('stato') with an indexed bigint ('Id') and 5 bytea fields
> ('d0','d1',...,'d4').
> I populated the table with 10000 rows; each d.. field initialized with 20
> bytes.
> Reported table size is 1.5MB. OK.
> Now, 1000 times over, I update 2000 different rows each time, changing the d0
> field while keeping the same length, and at the end of it all, I issued VACUUM.
> Now table size is 29MB.
>
> Why so big? What is an upper bound to estimate a table occupation on disk?
every (!) update creates a new row-version and marks the old one as dead,
but doesn't delete it.
A VACUUM marks those old rows as reusable - if there is no running
transaction that can still see the old row-version - but it normally doesn't
shrink the file on disk; the freed space is simply reused by later updates
(VACUUM FULL would rewrite the table and give the space back to the OS).
That's how MVCC works in PostgreSQL.
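
You can watch this happening with the statistics views. A quick sketch,
reusing the table name from your example (note that VACUUM FULL takes an
exclusive lock on the table, so use it with care):

-- dead (old) row versions accumulate until VACUUM marks them reusable
SELECT n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
FROM   pg_stat_user_tables
WHERE  relname = 'stato';

-- size on disk, including space held by dead/reusable rows
SELECT pg_size_pretty(pg_table_size('stato'));

-- rewrite the table and return the freed space to the OS
VACUUM FULL stato;
SELECT pg_size_pretty(pg_table_size('stato'));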
Regards, Andreas Kretschmer
--
Andreas Kretschmer
http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services