From: Tomas Vondra <tv(at)fuzzy(dot)cz>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Query Optimizer makes a poor choice
Date: 2011-11-29 23:14:22
Message-ID: 4ED5674E.4040601@fuzzy.cz
Lists: pgsql-general
On 29.11.2011 22:43, Tyler Hains wrote:
> There are actually more like 27 million rows in the table. That's why it
> really should be filtering the rows using the index on the other column
> before ordering for the limit.
Well, the problem is that the PostgreSQL MVCC model is based on keeping
multiple copies of each row. When you delete a row, it's not physically
removed - it's just marked as deleted, so that transactions still running
can see it. An update is just a delete + insert, so the consequences are
the same.
This means there may be a lot of dead rows - easily orders of magnitude
more than the live ones. So instead of 27 million row versions the table
may actually contain 270 million.
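If you want to see this in action, here's a minimal sketch using a
throwaway table (the name and row counts are just an example):

-- throwaway table with 100k rows
create table mvcc_demo (id int primary key, val text);
insert into mvcc_demo select i, 'x' from generate_series(1, 100000) i;
-- a full-table update leaves 100k dead row versions behind
update mvcc_demo set val = 'y';
-- the stats collector shows them (the numbers may lag a bit)
select n_live_tup, n_dead_tup from pg_stat_user_tables
 where relname = 'mvcc_demo';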
That's what (auto)vacuum is for - it reclaims the space occupied by dead
rows once there are no transactions that can still see them. This space
is then reused for new rows (created either by INSERT or by UPDATE).
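Continuing the sketch above, a manual vacuum reports what it reclaims:

vacuum verbose mvcc_demo;
-- should report roughly 100000 removed row versions and the pages
-- that were freed for reuse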
But if autovacuum can't keep pace with the changes - e.g. because you've
repeatedly run a full-table update, or because the table is updated
heavily and autovacuum is not aggressive enough - you've got a problem.
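If that's what's happening, you can make autovacuum more aggressive for
just that one table - this is only a sketch, the thresholds need to be
tuned to your workload:

alter table cards set (autovacuum_vacuum_scale_factor = 0.02,
                       autovacuum_vacuum_threshold = 1000);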
And this affects indexes too - each new row version needs a new entry in
the index (unless it's a HOT update, but let's not get into that here).
And that space is not reclaimed by a plain (auto)vacuum, so you may end
up with a perfectly healthy table and a bloated index.
Check the size of your table and indexes and see if it matches your
expectations. E.g. create a small table with 10000 rows and compute how
large the table would be with 27 million rows (just multiply by 2700).
Does that match the current size? Same thing for the index.
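Something like this gives you the actual on-disk sizes (pg_size_pretty
just makes the numbers readable):

select pg_size_pretty(pg_relation_size('cards'))       as table_size,
       pg_size_pretty(pg_relation_size('cards_pkey'))  as pkey_size,
       pg_size_pretty(pg_total_relation_size('cards')) as total_incl_indexes;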
And run the three queries I've posted in my previous post - that should
give you more details.
You may also use the pgstattuple contrib module - run this:
select * from pgstattuple('cards');
select * from pgstatindex('cards_pkey');
High values of dead_tuple_percent / free_percent (for the table) or
leaf_fragmentation (for the index), together with a low avg_leaf_density
(index), usually mean there's bloat.
But be careful - this actually reads the whole table / index.
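If the module is not installed yet, on 9.1 a simple

create extension pgstattuple;

is enough; on older releases you run the pgstattuple.sql script shipped
in contrib.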
> The documentation does not seem to give a clear reason for changing the
> value used in default_statistics_target or why you would override it
> with ALTER TABLE SET STATISTICS. My gut is telling me that this may be
> our answer if we can figure out how to tweak it.
That affects the estimates - when the data distribution is skewed, the
default level of detail may not be sufficient for a precise enough
estimate, and the optimizer then chooses a bad plan. Increasing the
statistics target means "collect more detailed statistics", and that
often helps to fix such issues. But I don't think that's the case here -
my guess is bloat.
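Just in case you want to experiment with it anyway - it's a per-column
setting followed by ANALYZE (the column name below is just a placeholder,
use the column from your WHERE clause):

alter table cards alter column some_column set statistics 500;
analyze cards;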
Tomas