Sequential vs. random values - number of pages in B-tree

From: pinker <pinker(at)onet(dot)eu>
To: pgsql-general(at)postgresql(dot)org
Subject: Sequential vs. random values - number of pages in B-tree
Date: 2016-08-18 11:32:12
Message-ID: 1471519932518-5916956.post@n5.nabble.com
Lists: pgsql-general

Hi!
After doing a quick test:
with sequential values:
create table t01 (id bigint);
create index i01 on t01(id);
insert into t01 SELECT s from generate_series(1,10000000) as s;

and random values:
create table t02 (id bigint);
create index i02 on t02(id);
insert into t02 SELECT random()*100 from generate_series(1,10000000) as s;

The page counts for the two tables are the same:
 relpages | relname
----------+---------
    44248 | t01
    44248 | t02

But for the indexes they differ:
 relpages | relname
----------+---------
    27421 | i01
    34745 | i02
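
For reference, the relpages figures come from pg_class and are kept up to date by VACUUM/ANALYZE, so something along these lines reproduces them (table and index names as above):

analyze t01;
analyze t02;

select relpages, relname
from pg_class
where relname in ('t01', 't02', 'i01', 'i02')
order by relname;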

Plus, Postgres does 5 times more disk writes with the random data.
What is the reason that Postgres needs more index pages to store random values
than sequential ones?
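
One way to dig into it, assuming the pgstattuple extension is available, is to compare how densely the leaf pages of the two indexes are packed, e.g.:

create extension if not exists pgstattuple;
-- leaf_pages and avg_leaf_density for each index
select leaf_pages, avg_leaf_density from pgstatindex('i01');
select leaf_pages, avg_leaf_density from pgstatindex('i02');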

