From: Francisco Olarte <folarte(at)peoplecall(dot)com>
To: Patrick B <patrickbakerbr(at)gmail(dot)com>
Cc: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: large number dead tup - Postgres 9.5
Date: 2016-09-12 11:06:43
Message-ID: CA+bJJbxm0FD-1s9S6GSnJLO7hgnKAGShG+iXV53gghLQEXO+FQ@mail.gmail.com
Lists: pgsql-general
Hi:
On Mon, Sep 12, 2016 at 1:17 AM, Patrick B <patrickbakerbr(at)gmail(dot)com> wrote:
>> schemaname relname n_live_tup n_dead_tup
>> ---------- ------------- ---------- ----------
>> public parts 191623953 182477402
...
> Because of that the table is very slow...
> When I do a select on that table it doesn't use an index, for example:
> \d parts;
>> "index_parts_id" btree (company_id)
>> "index_parts_id_and_country" btree (company_id, country)
> explain select * from parts WHERE company_id = 12;
>> Seq Scan on parts (cost=0.00..6685241.40 rows=190478997 width=223)
>> Filter: (company_id = 12)
You've already been directed to check that the table is really getting
vacuumed / analyzed, but I'd like to point out that if the count
estimates are nearly correct, that plan is good. It's estimating it
will fetch more than 99% of the table, and a seq scan tends to beat an
index scan easily when selecting that big a part of the table. Even
accounting for dead tuples it's still about 50% of the table, and a
seq scan is much faster PER TUPLE than an index scan ( an index scan
would likely touch every data page anyway for that big a fraction, so
reading all of them sequentially and doing a quick filter is easier ).
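If you want to confirm both points on your side, something like this
might help ( just a sketch, using the table and column names from your
mail; pg_stat_user_tables and enable_seqscan are standard in 9.5 ):

```sql
-- When did (auto)vacuum / (auto)analyze last run on the table?
SELECT relname, last_vacuum, last_autovacuum,
       last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'parts';

-- Discourage the seq scan for this session only, to see what cost the
-- planner assigns to the index plan; if the estimates are right it
-- should come out higher than the seq scan's cost.
SET enable_seqscan = off;
EXPLAIN SELECT * FROM parts WHERE company_id = 12;
RESET enable_seqscan;
```

If the index plan's estimated cost is indeed higher, the planner is
doing the right thing and the real problem is the dead-tuple count.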
Francisco Olarte.