From: Amit Langote <amitlangote09(at)gmail(dot)com>
To: ygnhzeus <ygnhzeus(at)gmail(dot)com>
Cc: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: How to specify/mock the statistic data of tables in PostgreSQL
Date: 2014-01-10 14:00:11
Message-ID: CA+HiwqF6esra=thi5hcsMmfJ+YBoVTaC4AAC+3OV+aUj0YSP1w@mail.gmail.com
Lists: pgsql-general
On Fri, Jan 10, 2014 at 6:00 PM, ygnhzeus <ygnhzeus(at)gmail(dot)com> wrote:
> Hi all,
>
> I want to use PostgreSQL to help me calculate the cardinality/selectivity of
> some queries, but I do not want to load any data into these tables in
> PostgreSQL (since the data size is huge). So I plan to calculate the
> statistics myself (outside PostgreSQL) and manually specify the metrics
> (maybe by modifying the pg_statistic table), so that PG's optimizer can use
> these statistics to evaluate the query (EXPLAIN ...). Here come my questions:
>
>
>
> 1. Is it possible to do what I've described above?
>
> 2. I've taken a look at the pg_statistic table and the pg_stats view; in the
> view I saw that most_common_elems/most_common_elem_freqs/elem_count_histogram
> were empty, and I'm also a little confused about the column called
> correlation. Is there any detailed documentation about how these metrics are
> calculated in PostgreSQL?
>
>
About correlation:
As you might know, an index on a column imparts a logical ordering (for
example, ascending) to the table's rows based on that column, but the
actual rows are not necessarily stored in the relation file in that same
physical order. So there is a random disk access penalty when fetching
individual rows from the heap (for example, in range scans that use the
index). "correlation" denotes how close these two orderings are to each
other.
A command called CLUSTER can be used to physically reorder a table's
rows to match the logical ordering imposed by some index on that
table. More about CLUSTER here:
http://www.postgresql.org/docs/9.3/static/sql-cluster.html
Consider the following example:
postgres=# create table test as select generate_series(1,1000000) as a
order by random();
SELECT 1000000
postgres=# create index test_idx on test using btree (a);
CREATE INDEX
postgres=# analyze test;
ANALYZE
postgres=# select correlation from pg_stats where tablename = 'test';
correlation
-------------
-0.00164016
(1 row)
postgres=# select count(*) from test where a between 34000 and 68000;
count
-------
34001
(1 row)
Time: 26.875 ms
Note here that the correlation is pretty close to zero, meaning the
physical ordering of rows is different from the logical ordering imposed
by the index.
postgres=# cluster test using test_idx;
CLUSTER
This should put rows of the table into the same order as the index.
postgres=# analyze test;
ANALYZE
postgres=# select correlation from pg_stats where tablename = 'test';
correlation
-------------
1
(1 row)
postgres=# select count(*) from test where a between 34000 and 68000;
count
-------
34001
(1 row)
Time: 12.990 ms
Note that the rows of the table are now in (almost) the same physical
order as the index, which reduces random disk access. After CLUSTER, the
same query runs in roughly half the time of the original unclustered
case.
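If you want to see how the planner itself reacts to the correlation
value, you could also compare the plan for that range query before and
after CLUSTER; this EXPLAIN step is not from the session above, just a
suggestion:
postgres=# explain select count(*) from test where a between 34000 and 68000;
With correlation near zero the planner tends to charge more random I/O
for an ordinary index scan of such a range, so a bitmap heap scan (or
even a seq scan) becomes relatively more attractive; with correlation
near 1 the index scan is costed as mostly sequential I/O and gets
cheaper. The exact plan chosen will depend on your settings and data, so
treat this only as a way to poke at the cost model.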
How the planner uses these pg_stats statistics for row estimation is
described here:
http://www.postgresql.org/docs/9.3/static/row-estimation-examples.html
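To see the raw numbers the planner works from, it may also help to look
at the per-column row in pg_stats; for example (again an illustration,
not part of the session above):
postgres=# select attname, n_distinct, most_common_vals, histogram_bounds
postgres-# from pg_stats where tablename = 'test' and attname = 'a';
For a column like "a" where every value is distinct, you would typically
find n_distinct = -1 (all values distinct), empty most_common_vals, and
a histogram_bounds array; those are exactly the inputs the row-estimation
examples on that page walk through.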
However, to understand how they are generated by ANALYZE (in most
cases, using random sampling), I guess you'd need to go through its
code in the source file "src/backend/commands/analyze.c".
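One related knob, in case the default sample turns out to be too coarse:
the statistics target controls both the size of the sample ANALYZE takes
and how many most-common values / histogram buckets it keeps. A quick
illustration (not from the session above):
postgres=# alter table test alter column a set statistics 1000;
ALTER TABLE
postgres=# analyze test;
ANALYZE
The default target is 100 (default_statistics_target); raising it gives
ANALYZE a larger sample and finer histograms, at the cost of a slower
ANALYZE and a bit more planning work.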
--
Amit Langote