From: Torsten Förtsch <torsten(dot)foertsch(at)gmx(dot)net>
To: Postgres General <pgsql-general(at)postgresql(dot)org>
Subject: Is http://www.postgresql.org/docs/9.3/static/row-estimation-examples.html up to date?
Date: 2014-01-15 20:40:36
Message-ID: 52D6F244.3080308@gmx.net
Lists: pgsql-general

Hi,

I am asking because ...

I have a table with

relpages | 19164
reltuples | 194775

pg_relation_size / 8192 yields the same number as relpages, so there is
no need to scale reltuples. The relation cardinality is therefore 194775.
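
For reference, those numbers can be pulled in one query. This is a
sketch: the table name "client" is taken from the EXPLAIN output below,
and the default 8 kB block size is assumed:

```sql
-- Sketch, assuming the table is the "client" relation from the EXPLAIN
-- plan and the default 8 kB block size:
SELECT relpages,
       reltuples,
       pg_relation_size(oid) / 8192 AS pages_on_disk  -- should match relpages
FROM pg_class
WHERE relname = 'client';
```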

The statistics target is the default, 100. So, I assume each of the 100
histogram buckets contains 194775 / 100 = 1947.75 tuples.
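
The histogram itself can be inspected in pg_stats. Again a sketch; the
table and column names "client" and "date_joined" are taken from the
EXPLAIN output below:

```sql
-- Sketch: show the bucket boundaries (101 bounds = 100 buckets);
-- "client" and "date_joined" are assumed from the EXPLAIN output.
SELECT histogram_bounds
FROM pg_stats
WHERE tablename = 'client'
  AND attname = 'date_joined';
```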

Now, I have a timestamp column and a query for col > '2013-01-01'. In 27
of the 100 buckets all values are > '2013-01-01'. The boundaries of the
bucket that 2013-01-01 falls into are

hist | 2013-01-08 20:48:52
hist | 2012-12-13 12:36:30
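
Counting the buckets that lie entirely above the cutoff can be done by
unnesting the bounds array (histogram_bounds is of type anyarray, hence
the cast via text). A sketch with the same assumed names:

```sql
-- Sketch: count histogram bounds above the cutoff; histogram_bounds is
-- anyarray, so cast via text to timestamp[] before unnesting.
SELECT count(*)
FROM unnest((SELECT histogram_bounds::text::timestamp[]
             FROM pg_stats
             WHERE tablename = 'client'
               AND attname = 'date_joined')) AS b(bound)
WHERE bound > '2013-01-01';
```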

There is no division operator for INTERVAL types, so I calculate in seconds:

select (27
+ extract('epoch' from
'2013-01-08 20:48:52'::timestamp
- '2013-01-01'::timestamp)
/ extract('epoch' from
'2013-01-08 20:48:52'::timestamp
- '2012-12-13 12:36:30'::timestamp))
* 1947.75;

That results in 53170.9642980797 and would be rounded to 53171.

However, EXPLAIN shows:

-> Seq Scan on client (cost=0.00..21731.03 rows=52939 width=29)
Filter: (date_joined > '2013-01-01 00:00:00'::timestamp
without time zone)

The numbers are of the same order of magnitude, but they are too
different to be rounding errors.

So, what did I do wrong?

Thanks,
Torsten
