From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Victor Giannakouris - Salalidis <victorasgs(at)gmail(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Statistics Injection
Date: 2016-07-02 14:35:48
Message-ID: 30138.1467470148@sss.pgh.pa.us
Lists: pgsql-hackers
Victor Giannakouris - Salalidis <victorasgs(at)gmail(dot)com> writes:
> For some research purposes, I am trying to modify the existing statistics
> of some tables in the catalogs in order to change the execution plan,
> experiment with the EXPLAIN call etc.
> Concretely, what I'd like to do is to create a "fake" table with a schema
> of my choice (that's the easy part) and then modify the statistics
> (particularly, the number of tuples and the number of pages).
> Firstly, I create an empty table (CREATE TABLE newTable(....)) and then I
> update the pg_class table as well (UPDATE pg_class SET relpages = #pages
> WHERE relname='newTable').
> The problem is that, even if I set the reltuples and relpages of my choice,
> when I run the EXPLAIN clause for a query in which the 'newTable' is
> involved in (e.g. EXPLAIN SELECT * FROM newTable), I get the same cost and
> row estimation.
You can't really do it like that, because the planner always looks at
the true relation size (RelationGetNumberOfBlocks()). It uses
reltuples/relpages as an estimate of tuple density, not as hard numbers.
The reason for this is to cope with any table growth that may have
occurred since the last VACUUM/ANALYZE.
You could modify the code in plancat.c to change that, or you could
plug into the get_relation_info_hook to tweak the constructed
RelOptInfo before anything is done with it.
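A hook-based extension along those lines might look like the sketch below. This is a hypothetical fragment, not anything from the thread: it assumes the standard get_relation_info_hook signature from optimizer/plancat.h, must be built against the server headers as a loadable module, and hard-codes values (fake_relid, the page/tuple counts) that a real extension would expose via GUCs:

```c
/* Hypothetical extension sketch: override the planner's size estimates
 * for one relation via get_relation_info_hook. The hook fires after
 * get_relation_info() has filled in rel->pages and rel->tuples from
 * RelationGetNumberOfBlocks() and pg_class, so overwriting them here
 * makes the planner see whatever numbers you choose. */
#include "postgres.h"
#include "fmgr.h"
#include "optimizer/plancat.h"

PG_MODULE_MAGIC;

static get_relation_info_hook_type prev_get_relation_info_hook = NULL;

/* OID of the table whose stats we want to fake (illustrative;
 * a real extension would make this configurable). */
static Oid fake_relid = InvalidOid;

static void
fake_stats_get_relation_info(PlannerInfo *root, Oid relationObjectId,
                             bool inhparent, RelOptInfo *rel)
{
    /* Chain to any previously installed hook first. */
    if (prev_get_relation_info_hook)
        prev_get_relation_info_hook(root, relationObjectId, inhparent, rel);

    if (relationObjectId == fake_relid)
    {
        rel->pages = 100000;    /* pretend: 100k pages */
        rel->tuples = 10000000; /* pretend: 10M rows   */
    }
}

void
_PG_init(void)
{
    prev_get_relation_info_hook = get_relation_info_hook;
    get_relation_info_hook = fake_stats_get_relation_info;
}
```

The hook approach has the advantage of requiring no core-code patch: the module is loaded with shared_preload_libraries (or LOAD), and EXPLAIN on the target table then reflects the injected pages/tuples.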
regards, tom lane