From: | "Jim C(dot) Nasby" <jnasby(at)pervasive(dot)com> |
---|---|
To: | "Thomas F(dot) O'Connell" <tfo(at)sitening(dot)com> |
Cc: | pgsql-admin(at)postgresql(dot)org |
Subject: | Re: vacuumdb vs. max_connections: SELECT waiting |
Date: | 2006-05-15 15:07:53 |
Message-ID: | 20060515150752.GC26212@pervasive.com |
Lists: pgsql-admin
On Sat, May 13, 2006 at 03:47:00AM -0500, Thomas F. O'Connell wrote:
>
> On May 13, 2006, at 12:35 AM, Tom Lane wrote:
>
> >VACUUM FULL does all right at packing the table (except in
> >pathological cases, eg a very large tuple near the end of the table).
> >It mostly bites as far as shrinking indexes goes, however. If you've
> >got a serious index bloat problem then REINDEX is the only solution.
> >CLUSTER does an automatic REINDEX after compacting the table --- it
> >doesn't have any special properties as far as the index space goes.
> >However, if you've got serious table bloat then CLUSTER will probably
> >be quicker than VACUUM FULL.
>
> So my reading of VACUUM VERBOSE output leads me to believe that I
> could actually interpret both table and index bloat for the entire
> database from it (and suggests a useful reporting tool...).
>
> This is from the docs:
>
> INFO: index "onek_unique1" now contains 1000 tuples in 14 pages
> DETAIL: 3000 index tuples were removed.
> 0 index pages have been deleted, 0 are currently reusable.
>
> Does this imply that 75% of the tuples in this index were free space?
> Even so, since this is an index, the pages aren't shrunk, per your
> note above, and a REINDEX would be required for reclamation, right?
Correct.
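(That's 3000 removed out of 4000 total index entries, i.e. 75%.) As a
minimal sketch of the reclamation, reusing the index/table names from
the docs example (adjust for your own schema):

    -- rebuild just the bloated index
    REINDEX INDEX onek_unique1;
    -- or rebuild every index on the table in one go
    REINDEX TABLE onek;

Keep in mind that REINDEX blocks writes to the table while it runs, so
schedule it for a quiet window.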
> And, then, later:
>
> INFO: "onek": removed 3000 tuples in 108 pages
> DETAIL: CPU 0.01s/0.06u sec elapsed 0.07 sec.
> INFO: "onek": found 3000 removable, 1000 nonremovable tuples in 143
> pages
> DETAIL: 0 dead tuples cannot be removed yet.
>
> Which implies that this table was also 75% bloated? And a VACUUM FULL
> (or CLUSTER) could improve this even more than the VACUUM VERBOSE
> ANALYZE from the example in the docs?
Also correct. There are some tricks you can use to shrink the table
without resorting to a VACUUM FULL or CLUSTER, but it's a real PITA.
Also, remember that if your database is growing, it will eventually
reuse all that free space.
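To spell out the heavier options Tom mentioned (again using the docs
example names; the CLUSTER syntax below is the 8.1-era form):

    -- compact the heap, then rebuild its bloated indexes
    VACUUM FULL onek;
    REINDEX TABLE onek;

    -- or rewrite the table in index order, which rebuilds the
    -- indexes as a side effect
    CLUSTER onek_unique1 ON onek;

Both paths take exclusive locks on the table, so treat them as
maintenance-window operations.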
> Unfortunately, there are several seqscan-using queries and several
> large tables (with the largest currently approaching 13% of physical
> memory).
So are you saying that the entire table fits in memory? Does the entire
database fit in memory? If so, you're unlikely to find huge gains from
vacuuming, unless you're really pushing the system hard.
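If you want hard numbers rather than a guess, the 8.1 size functions
make the check easy ('onek' here is still just the docs example table):

    -- heap only
    SELECT pg_size_pretty(pg_relation_size('onek'));
    -- heap plus indexes and TOAST
    SELECT pg_size_pretty(pg_total_relation_size('onek'));

Compare that against shared_buffers plus whatever the OS cache gives you.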
> I'm actually trying to prioritize administrative operations that
> could result in noticeable performance gains. If compacting tables
> and indexes turns out to be low on the pole in terms of performance
> considerations, then I'm inclined to look elsewhere, especially
> considering the administrative headache (from the perspective of the
> related application) required to undertake a database-wide CLUSTER or
> VACUUM FULL + REINDEX.
It depends. On badly bloated systems I've seen 20-50% improvements from
setting up an adequate vacuum plan.
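By "vacuum plan" I just mean regular, targeted vacuuming instead of the
occasional heroic VACUUM FULL. A rough sketch via cron (the schedule,
database and table names here are made up; tune them to your update
rates):

    # crontab for the postgres user
    # nightly database-wide VACUUM ANALYZE
    15 3 * * * vacuumdb --analyze --quiet mydb
    # hit the heavily-updated table more often
    0 * * * *  vacuumdb --analyze --quiet --table onek mydb

On 8.1 you can get much the same effect by enabling autovacuum in
postgresql.conf and tuning the per-table thresholds.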
--
Jim C. Nasby, Sr. Engineering Consultant jnasby(at)pervasive(dot)com
Pervasive Software http://pervasive.com work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461