Re: large number dead tup - Postgres 9.5

From: Akash Bedi <abedi0501(at)gmail(dot)com>
To: Francisco Olarte <folarte(at)peoplecall(dot)com>
Cc: Patrick B <patrickbakerbr(at)gmail(dot)com>, pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: large number dead tup - Postgres 9.5
Date: 2016-09-12 11:30:49
Message-ID: CALEOz1HG2Ovg_MsgNN0s_V9a=XvUpM3WF1Ph1xJw6VHpT_7EuA@mail.gmail.com
Lists: pgsql-general

Note that VACUUM cannot remove dead rows while there is a long-running
active query OR any idle transaction at isolation level Repeatable Read or
higher; tracking transactions in "pg_stat_activity" should help you
find and eliminate such activity. Also, the row estimates take the size of
your table into account, so close estimates don't necessarily mean an
ANALYZE has been performed; a better way to verify this is to monitor
"pg_stat_user_tables" and check when autovacuum/autoanalyze last ran on
this table.
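A minimal sketch of both checks (column names as documented for PostgreSQL 9.5; substitute your own table name):

```sql
-- Sessions that can hold back VACUUM: long-running queries and idle
-- transactions, whose snapshots pin the cleanup horizon.
SELECT pid, state, xact_start, query
FROM pg_stat_activity
WHERE state IN ('active', 'idle in transaction')
ORDER BY xact_start;

-- When (auto)vacuum and (auto)analyze last ran, plus dead-tuple counts.
SELECT relname, n_live_tup, n_dead_tup,
       last_vacuum, last_autovacuum, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'parts';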

Regards,
Akash

On Mon, Sep 12, 2016 at 4:36 PM, Francisco Olarte <folarte(at)peoplecall(dot)com>
wrote:

> Hi:
>
> On Mon, Sep 12, 2016 at 1:17 AM, Patrick B <patrickbakerbr(at)gmail(dot)com>
> wrote:
> >> schemaname relname n_live_tup n_dead_tup
> >> ---------- ------------- ---------- ----------
> >> public parts 191623953 182477402
> ...
> > Because of that the table is very slow...
> > When I do a select on that table it doesn't use an index, for example:
> > \d parts;
> >> "index_parts_id" btree (company_id)
> >> "index_parts_id_and_country" btree (company_id, country)
> > explain select * from parts WHERE company_id = 12;
> >> Seq Scan on parts (cost=0.00..6685241.40 rows=190478997 width=223)
> >> Filter: (company_id = 12)
>
> You've already been directed to check that the table is really getting
> vacuumed / analyzed, but I'd like to point out that if the count
> estimates are nearly correct, that plan is good. It is estimating
> fetching more than 99% of the table, and a seq scan tends to beat an
> index scan easily when selecting that big a part of the table; even
> accounting for dead tuples it's still about 50% of the table, and a
> seq scan is much faster PER TUPLE than an index scan ( and an index
> scan would likely touch every data page for that big a fraction, so
> reading all of them sequentially and doing a quick filter is easier ).
>
> Francisco Olarte.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general(at)postgresql(dot)org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
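Francisco's point can be seen directly in the planner's own cost estimates. A sketch, assuming the `parts` table and `index_parts_id` index from the thread: temporarily discouraging sequential scans makes the planner show the index-scan alternative, whose estimated cost for a ~99%-selective predicate is typically far higher.

```sql
-- Default plan for a predicate matching most of the table: Seq Scan.
EXPLAIN SELECT * FROM parts WHERE company_id = 12;

-- Discourage seq scans for this session to see the index-scan estimate.
SET enable_seqscan = off;
EXPLAIN SELECT * FROM parts WHERE company_id = 12;
RESET enable_seqscan;
```

Comparing the two cost numbers shows why the optimizer's choice of a sequential scan here is reasonable rather than a symptom of missing statistics.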
