From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: Bradley McCune <bradley(dot)mccune(at)noaa(dot)gov>
Cc: David Welton <davidw(at)dedasys(dot)com>, Paul Tilles <paul(dot)tilles(at)noaa(dot)gov>, Postgres general mailing list <pgsql-general(at)postgresql(dot)org>
Subject: Re: V8.4 TOAST table problem
Date: 2013-07-12 15:28:23
Message-ID: CAOR=d=2wvdq+g3umDn3UgWX9Rtv+z0Yj2Z7N4SmOaNhqAeYooQ@mail.gmail.com
Lists: pgsql-general
Did you have a long-running transaction, especially a prepared
transaction, blocking vacuum from reclaiming the space?
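
For what it's worth, a quick way to check is to look at pg_stat_activity
and pg_prepared_xacts; something along these lines (column names as on
8.4/9.1 -- on 9.2 and later it's pid and query instead of procpid and
current_query; the one-hour cutoff is just an arbitrary example):

    -- sessions that have held a transaction open for over an hour
    SELECT procpid, usename, xact_start, current_query
      FROM pg_stat_activity
     WHERE xact_start < now() - interval '1 hour'
     ORDER BY xact_start;

    -- prepared (two-phase) transactions never committed or rolled back
    SELECT gid, prepared, owner, database
      FROM pg_prepared_xacts
     ORDER BY prepared;

Anything old showing up in either of those will keep vacuum from
removing dead rows.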
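
On the per-table autovacuum tuning Bradley mentions below, the usual
knob is the table's storage parameters, and since 8.4 the TOAST table
can be targeted as well via the toast. prefix. A rough sketch -- the
table name and numbers are only illustrative, not taken from your
system:

    ALTER TABLE timeseries SET (
        autovacuum_vacuum_scale_factor       = 0.01,
        autovacuum_vacuum_threshold          = 1000,
        toast.autovacuum_vacuum_scale_factor = 0.01
    );

With those values autovacuum kicks in once dead rows exceed roughly 1%
of the table (plus 1000 tuples), instead of the default 20%.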
On Fri, Jul 12, 2013 at 8:10 AM, Bradley McCune <bradley(dot)mccune(at)noaa(dot)gov> wrote:
> David,
>
> (As a preface, I have already gone forward with completely rebuilding the
> database which seems to have finally fixed the problem. Rebuilding the
> table itself had no effect, and I couldn't wait much longer to move
> forward.)
>
> Yes, this seems similar; the key difference is that VACUUM FULL did not
> alleviate the problem in our case. The extra "bloated" disk space was
> still considered "in use" by the database server, so it was never
> returned to the system. I suspect the server was storing the table data
> in pages inefficiently (for reasons unknown), because that TOAST table
> used roughly 5x as many pages to store the same number of tuples as the
> equivalent tables in other, similar databases.
>
> Depending on how often you have to use VACUUM FULL, you might want to
> consider making autovacuum more aggressive on that hot table so it is
> kept in check more often. (That way the dead space is reused within the
> table, rather than being handed back to the operating system only to be
> reallocated to the database again.)
>
>
> On Fri, Jul 12, 2013 at 4:09 AM, David Welton <davidw(at)dedasys(dot)com> wrote:
>>
>> Hi,
>>
>> I have a very similar problem... details below.
>>
>> On Wed, Jul 3, 2013 at 5:19 PM, Paul Tilles <paul(dot)tilles(at)noaa(dot)gov> wrote:
>> > Recently, I've had a PostgreSQL 8.2.11 server upgraded to 8.4 in order
>> > to take advantage of autovacuum features. This server exists in a very
>> > closed environment (isolated network, limited root privileges; this
>> > explains the older software in use) and runs on RHEL 5.5 (i686). After
>> > the upgrade, the database has constantly been growing to the tune of
>> > 5-6 GB a day. Normally, the database, as a whole, is ~20GB; currently,
>> > it is ~89GB. We have a couple of other servers which run equivalent
>> > databases and synchronize their records to each other via a 3rd-party
>> > application (one whose inner workings I do not have access to). The
>> > other databases are ~20GB, as they should be.
>>
>> Our machine is an Ubuntu 12.04 system running on AWS, so it's a 64 bit
>> system:
>>
>> PostgreSQL 9.1.9 on x86_64-unknown-linux-gnu, compiled by gcc
>> (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit
>>
>> > Running the following SQL, it's fairly obvious there's an issue with a
>> > particular table, and, more specifically, its TOAST table.
>>
>> Same thing here: we have a table with around 2-3 megs of data that is
>> blowing up to *10 gigs*.
>>
>> > This TOAST table is for a table called "timeseries" which saves large
>> > records of blobbed data. A SUM(LENGTH(blob)/1024./1024.) over all the
>> > records in timeseries yields ~16GB for that column. There should be *no
>> > reason* this table's TOAST table should be as large as it is.
>>
>> Similar situation: it's a bytea column that gets "a lot" of updates, on
>> the order of tens of thousands a day.
>>
>> > I've performed a VACUUM FULL VERBOSE ANALYZE timeseries, and the vacuum
>> > runs
>> > to completion with no errors.
>>
>> VACUUM FULL fixes the problem for us by recovering all the wasted disk
>> space. I don't have the knowledge to investigate much further on my
>> own, but I'd be happy to try out a few things. Unfortunately, the
>> database contains sensitive data that I can't share, but I could
>> probably script a similar situation...
>>
>> --
>> David N. Welton
>>
>> http://www.dedasys.com/
>
>
>
>
> --
> Bradley D. J. McCune
--
To understand recursion, one must first understand recursion.