From: Tom DalPozzo <t(dot)dalpozzo(at)gmail(dot)com>
To: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: huge table occupation after updates
Date: 2016-12-10 12:15:23
Message-ID: CAK77FCTaLAck4JK5GYOdf3x==n-Pzy3k9X=nM0_M+zM7RukxhQ@mail.gmail.com
Lists: pgsql-general
Hi,
I have a table ('stato') with an indexed bigint ('Id') and 5 bytea fields
('d0','d1',...,'d4').
I populated the table with 10000 rows, each d.. field initialized with 20
bytes.
The reported table size is 1.5MB. OK.
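For reference, the setup was roughly as follows (a sketch, not the exact
script; the constant byte pattern is just for illustration):

  CREATE TABLE stato (
      "Id" bigint,
      d0 bytea, d1 bytea, d2 bytea, d3 bytea, d4 bytea
  );
  CREATE INDEX ON stato ("Id");

  -- 10000 rows, each dX column initialized with 20 bytes
  INSERT INTO stato
  SELECT i,
         decode(repeat('ab', 20), 'hex'),
         decode(repeat('ab', 20), 'hex'),
         decode(repeat('ab', 20), 'hex'),
         decode(repeat('ab', 20), 'hex'),
         decode(repeat('ab', 20), 'hex')
  FROM generate_series(1, 10000) AS i;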
Then, 1000 times in a row, I updated 2000 different rows each time, changing
the d0 field but keeping the same length, and at the end of it all I issued
VACUUM.
Now the table size is 29MB.
Why so big? What is an upper bound for estimating a table's occupation on disk?
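Each update pass was of this kind, repeated 1000 times over different
2000-row ranges (again a sketch; the real test drove the loop from a client):

  -- one pass: rewrite d0 for 2000 rows, keeping the same length
  UPDATE stato
  SET d0 = decode(repeat('cd', 20), 'hex')
  WHERE "Id" BETWEEN 1 AND 2000;

  -- after the last pass
  VACUUM stato;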
The same test, redone with dX length = 200 bytes instead of 20, reports:
Size before UPDATEs = 11MB. OK.
Size after UPDATEs = 1.7GB. Why?
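The sizes quoted above can be checked with queries along these lines:

  SELECT pg_size_pretty(pg_total_relation_size('stato'));  -- heap + indexes + TOAST
  SELECT pg_size_pretty(pg_relation_size('stato'));        -- heap (main fork) only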
Attached is a txt file with details of the statistics commands I issued (max
row size, row count, etc.).
Regards
Pupillo
Attachment: report huge table.txt (text/plain, 8.4 KB)