From: | Tom DalPozzo <t(dot)dalpozzo(at)gmail(dot)com> |
---|---|
To: | Rob Sargent <robjsargent(at)gmail(dot)com> |
Cc: | Francisco Olarte <folarte(at)peoplecall(dot)com>, pgsql-general <pgsql-general(at)postgresql(dot)org> |
Subject: | Re: huge table occupation after updates |
Date: | 2016-12-10 15:27:36 |
Message-ID: | CAK77FCThRptU8Coi-Ckv6xVmOHQtxHCYMsjvvDSxmUPomy3tLQ@mail.gmail.com |
Lists: | pgsql-general |
Hi,
I'd like to do that! But my DB must be crash-proof! Very high reliability
is a must.
I also use sync replication.
Regards
Pupillo
2016-12-10 16:04 GMT+01:00 Rob Sargent <robjsargent(at)gmail(dot)com>:
>
> > On Dec 10, 2016, at 6:25 AM, Tom DalPozzo <t(dot)dalpozzo(at)gmail(dot)com> wrote:
> >
> > Hi,
> > you're right, VACUUM FULL recovered the space completely.
> > So, at this point I'm worried about my needs.
> > I cannot issue VACUUM FULL, as I read it locks the table.
> > In my DB, I (would) need to have a table with one bigint id field + 10
> > bytea fields, each about 100 bytes long (more or less, not fixed).
> > 5,000 to 10,000 rows maximum, but let's say 5,000.
> > As for traffic, I can assume 10,000 updates per row per day (spread over
> > groups of hours), each update involving two of those fields, randomly.
> > Rows are also chosen randomly (in my test I used a block of 2,000 just to
> > try one possibility).
> > So it's a total of 50 million updates per day, hence (50 million * 100
> > bytes * 2 fields updated) 10 GB net per day.
> > I'm afraid it's not possible, according to my results.
> > Regards
> > Pupillo
> >
>
> Is each of the updates visible to a user, or read/analyzed by another
> activity? If not, you can do most of the updates in memory and flush a
> snapshot periodically to the database.
>
>
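For reference, here is a minimal sketch of the workload described in the quoted message. The table name (`payload`), the column names (`id`, `d0` .. `d9`) and the connection string are illustrative assumptions, not taken from the thread.

```python
# Sketch of the table and update pattern from the quoted message.
# Table/column names and the DSN are illustrative assumptions.
import os
import random

import psycopg2

conn = psycopg2.connect("dbname=testdb")   # assumed local test database
cur = conn.cursor()

# One bigint id plus 10 bytea fields of roughly 100 bytes each.
cur.execute("""
    CREATE TABLE IF NOT EXISTS payload (
        id bigint PRIMARY KEY,
        d0 bytea, d1 bytea, d2 bytea, d3 bytea, d4 bytea,
        d5 bytea, d6 bytea, d7 bytea, d8 bytea, d9 bytea
    )
""")

# One of the ~50 million daily updates: a random row, two random bytea
# columns, ~100 bytes of new data each. Every such UPDATE leaves behind
# a dead row version until VACUUM reclaims it, which is where the table
# growth in the thread comes from.
row_id = random.randrange(5000)
c1, c2 = random.sample(range(10), 2)
cur.execute(
    f"UPDATE payload SET d{c1} = %s, d{c2} = %s WHERE id = %s",
    (os.urandom(100), os.urandom(100), row_id),
)
conn.commit()
```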
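And a rough sketch of the "update in memory, flush a snapshot periodically" idea from the reply, again using the hypothetical `payload` table. Note that on its own this is not crash-proof: updates buffered in memory are lost if the application dies before a flush, which is exactly the concern raised at the top of this message.

```python
# Sketch of batching updates in application memory and flushing them
# periodically, as suggested in the reply. Uses the hypothetical
# "payload" table from the previous sketch.
import time

import psycopg2

# (row_id, column_name) -> latest value; repeated updates to the same
# field are absorbed here instead of producing one dead tuple each.
pending = {}

def record_update(row_id, column, value):
    pending[(row_id, column)] = value

def flush(conn):
    """Write the accumulated state to PostgreSQL in a single transaction."""
    with conn.cursor() as cur:
        for (row_id, column), value in pending.items():
            # column comes from a fixed, known set (d0 .. d9), so the
            # f-string is safe here.
            cur.execute(
                f"UPDATE payload SET {column} = %s WHERE id = %s",
                (value, row_id),
            )
    conn.commit()
    pending.clear()

if __name__ == "__main__":
    import os, random
    conn = psycopg2.connect("dbname=testdb")   # assumed DSN
    while True:
        # Simulated incoming traffic; in the real application these calls
        # would come from wherever the updates originate.
        for _ in range(1000):
            record_update(random.randrange(5000),
                          f"d{random.randrange(10)}",
                          os.urandom(100))
        flush(conn)          # flush roughly once a minute
        time.sleep(60)
```

The point of the batching is that many updates to the same field collapse into a single new row version per flush interval, cutting the dead-tuple churn accordingly; the trade-off is durability between flushes, which is why the crash-proof and synchronous-replication requirements above matter here.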