From: pgboy(at)guthrie(dot)charm(dot)net
To: pgsql-admin(at)postgresql(dot)org
Subject: Re: storage calculations
Date: 2003-07-31 12:51:09
Message-ID: Pine.LNX.4.50.0307310844001.30174-100000@guthrie.charm.net
Lists: pgsql-admin
On Thu, 31 Jul 2003, Peter Eisentraut wrote:
> pgboy(at)guthrie(dot)charm(dot)net writes:
> > sorry, when i referred to "disk space used" i meant the actual amount
> > used by live rows. if i insert 1000 rows then delete 999 of them, the
> > disk file will be the size of the 1000-row container until i do a
> > "vacuum full" - which takes a table lock, which is a bad thing.
>
> The task that you originally described was that you want to monitor when
> the disk is getting full. For that task, you need to take into account
> the actual size of the data on disk, not the size after a "vacuum full"
> which you say you do not want to execute. Basing a disk full monitor on
> hypothetical sizes sounds pretty useless.
>
> > given that, i'd like to try to be able to calculate the number of
> > bytes a row uses given its schema. i've seen this kind of
> > documentation for other dbs, i just cannot seem to find it
> > in the postgresql docs.
>
> There is some information about that in the FAQ, but keep in mind that
> rows may be compressed or moved to secondary storage automatically.
well, i can admit that i am confused. my assumption is that when a row
is deleted in pg, that row is merely marked as 'gone' until a vacuum-full
is done. my further assumption is that if i continue to add rows, those
rows are not necessarily appended to the end of the physical data file,
but can be written over 'deleted' rows.
given that, a vacuum-full is the only way i know of to get an accurate
reflection of the number of bytes being used to store the data. without
the vacuum, i can tell how big a potentially sparse file is, but i don't
want to raise a warning just because the file size is getting large
(unless, of course, pg does not overwrite deleted rows, in which case the
warning, or a vacuum-full, seems appropriate).
i think i agree with you, too, that i cannot really calculate a
hypothetical size, unless i have all fixed-size fields. in that case,
i should be able to accurately calculate the size, yes? if not, what
are the variables i could not account for?
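a back-of-the-envelope sketch for the all-fixed-size case: the catalog stores each column's byte width in pg_attribute.attlen, but on top of that sum every row carries a tuple header plus alignment padding, and every 8 kB page has its own header and per-row item pointers - so the figure below is a floor, not the exact on-disk cost ('mytable' again a placeholder):

```sql
-- sum of the declared widths of the user columns (attnum > 0).
-- attlen is -1 for variable-length types, so this only makes sense
-- when every column is fixed-size; add the per-row tuple header and
-- alignment padding on top of this figure.
SELECT sum(a.attlen) AS fixed_data_bytes_per_row
FROM pg_attribute a, pg_class c
WHERE a.attrelid = c.oid
  AND c.relname = 'mytable'
  AND a.attnum > 0;
```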
uh, any more info on your comment "rows may be compressed or moved
to secondary storage automatically"? i'd *love* to know how to do
that.
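(if that comment refers to TOAST - the mechanism that automatically compresses large variable-length values or moves them out of line into a secondary table - the per-column strategy can at least be nudged; table and column names below are placeholders:)

```sql
-- TOAST storage strategies: PLAIN (in-line, no compression),
-- EXTENDED (compress, then move out of line - the default for most
-- variable-length types), EXTERNAL (out of line, uncompressed),
-- MAIN (compress, stay in line if possible).
ALTER TABLE mytable ALTER COLUMN wide_col SET STORAGE EXTERNAL;
```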
thanks.
pgboy