A 154 GB table swelled to 527 GB on the Slony slave. How to compact it?

From: Aleksey Tsalolikhin <atsaloli(dot)tech(at)gmail(dot)com>
To: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: A 154 GB table swelled to 527 GB on the Slony slave. How to compact it?
Date: 2012-03-07 03:05:36
Message-ID: CA+jMWodz_6rhfCd9APKf3vkun-kp-gSW=-TgH_Fo+SPoQw2X8A@mail.gmail.com
Lists: pgsql-general

We're replicating a PostgreSQL 8.4.x database using Slony1-1.2.x.

The origin database "data/base" directory is 197 GB in size.

The slave database "data/base" directory is 562 GB in size, which puts
the filesystem over 75% utilization and has set off the "disk free" siren.

My biggest table* measures 154 GB on the origin, and 533 GB on
the slave. (*As reported by:

SELECT relname AS "Table",
       pg_size_pretty(pg_total_relation_size(relid)) AS "Size"
  FROM pg_catalog.pg_statio_user_tables
 ORDER BY pg_total_relation_size(relid) DESC;
)

I took a peek at this table on the slave using pgadmin3. The table
has autovacuum enabled, and TOAST autovacuum enabled.

There are 8.6 million live tuples, and 1.5 million dead tuples.

Last autovacuum was over a month ago.

Last autoanalyze was 3 hours ago.
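
For reference, the same tuple counts and vacuum timestamps can be pulled
straight from pg_stat_user_tables without pgadmin3; a sketch, with
'big_table' standing in for the real table name:

SELECT relname, n_live_tup, n_dead_tup,
       last_autovacuum, last_autoanalyze
  FROM pg_stat_user_tables
 WHERE relname = 'big_table';  -- placeholder name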

The table itself is 4 GB, and its TOAST table is 527 GB.
The indexes total 3 GB.
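
PostgreSQL 8.4 has no pg_table_size() or pg_indexes_size(), so as far as
I can tell the breakdown has to be summed by hand; a sketch, again with
'big_table' as a placeholder (and assuming the table really has a TOAST
table, since pg_total_relation_size(reltoastrelid) fails otherwise):

SELECT pg_size_pretty(pg_relation_size(c.oid)) AS heap,
       pg_size_pretty(pg_total_relation_size(c.reltoastrelid)) AS toast,
       pg_size_pretty(pg_total_relation_size(c.oid)
                      - pg_relation_size(c.oid)
                      - pg_total_relation_size(c.reltoastrelid)) AS indexes
  FROM pg_class c
 WHERE c.relname = 'big_table';  -- indexes = total - heap - TOAST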

The autovacuum threshold is 20%, and the table is just under that threshold.
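
Doing the arithmetic with the default settings, autovacuum fires at
autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples
= 50 + 0.2 * 8,600,000 = about 1,720,050 dead tuples, and 1.5 million
dead tuples is indeed just under that. A sketch to check it per table
(n_live_tup is only an approximation of reltuples):

SELECT relname, n_dead_tup,
       50 + 0.2 * n_live_tup AS approx_autovacuum_trigger
  FROM pg_stat_user_tables
 WHERE relname = 'big_table';  -- placeholder name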

I ran VACUUM ANALYZE VERBOSE on the table, but the filesystem is still at
76% utilization. In fact, the "data/base" directory has now grown to 565 GB.
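
As I understand it, a plain VACUUM only marks dead space inside the table
(and TOAST) files as reusable; it never returns that space to the
operating system, so a full rewrite would be needed to shrink the files.
A sketch of what I think that would look like ('big_table' and
'big_table_pkey' are placeholders; both commands take an ACCESS EXCLUSIVE
lock, and on 8.4 CLUSTER is usually preferred over the old in-place
VACUUM FULL), though I don't know how well either plays with Slony on
the slave:

VACUUM FULL big_table;                   -- in-place rewrite, slow on 8.4
CLUSTER big_table USING big_table_pkey;  -- or: rewrite in index order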

Why is my slave bigger than my master? How can I compact it, please?

Best,
Aleksey
