From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Greg Spiegelberg <gspiegelberg(at)gmail(dot)com>
Cc: Mike Sofen <msofen(at)runbox(dot)com>, "pgsql-performa(dot)" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Millions of tables
Date: 2016-09-28 18:00:35
Message-ID: 20160928180035.GX5148@tamriel.snowman.net
Lists: pgsql-performance
Greg,
* Greg Spiegelberg (gspiegelberg(at)gmail(dot)com) wrote:
> On Wed, Sep 28, 2016 at 11:27 AM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
> > * Greg Spiegelberg (gspiegelberg(at)gmail(dot)com) wrote:
> > > Bigger buckets mean a wider possibility of response times. Some buckets
> > > may contain 140k records and some 100X more.
> >
> > Have you analyzed the depth of the btree indexes to see how many more
> > pages need to be read to handle finding a row in 140k records vs. 14M
> > records vs. 140M records?
> >
> > I suspect you'd find that the change in actual depth (meaning how many
> > pages have to actually be read to find the row you're looking for) isn't
> > very much and that your concern over the "wider possibility of response
> > times" isn't well founded.
> Excellent feedback! Um, how does one look at tree depth in PostgreSQL?
> Oracle I know but have not done the same in PG. Pointers?
CREATE EXTENSION pageinspect;
SELECT * FROM bt_metap('indexname');
https://www.postgresql.org/docs/9.5/static/pageinspect.html
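For instance, something like the following should show the depth directly (the index name here is just a placeholder; the "level" column of bt_metap's output is the height of the btree, which is what you'd compare across your 140k vs. 14M vs. 140M row buckets):

```sql
-- Requires the pageinspect extension (ships in contrib)
CREATE EXTENSION IF NOT EXISTS pageinspect;

-- 'idx_bucket' is a placeholder; substitute one of your actual indexes.
-- "level" is the height of the tree above the leaf pages, i.e. roughly
-- how many internal pages must be read to descend from root to leaf
-- for a single-row lookup.
SELECT root, level, fastroot, fastlevel
FROM bt_metap('idx_bucket');
```

A depth of 3 vs. 4 between your smallest and largest buckets would mean only one extra page read per lookup.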
Thanks!
Stephen