Re: Huge shared hit for small table

From: Justin Pryzby <pryzby(at)telsasoft(dot)com>
To: Scott Rankin <srankin(at)motus(dot)com>
Cc: "pgsql-performance(at)lists(dot)postgresql(dot)org" <pgsql-performance(at)lists(dot)postgresql(dot)org>
Subject: Re: Huge shared hit for small table
Date: 2019-11-04 19:56:49
Message-ID: 20191104195649.GB4999@telsasoft.com
Lists: pgsql-performance

On Mon, Nov 04, 2019 at 07:38:40PM +0000, Scott Rankin wrote:
> In the staging environment, we get this:
>
> Index Scan using "programPK" on public.program prog (cost=0.29..0.35 rows=1 width=16) (actual time=0.002..0.003 rows=1 loops=21965)
> Output: prog.id, prog.version, prog.active, prog.created_date, prog.last_modified_date, prog.created_by, prog.last_modified_by, prog.client_id, prog.scheme_id, prog.name, prog.legacy_group_id, prog.custom_fields, prog.setup_complete, prog.setup_messages, prog.legacy_program_type
> Index Cond: (prog.id = per.program)
> Buffers: shared hit=87860
>
> In the production environment, we get this:
>
> Index Scan using "programPK" on public.program prog (cost=0.29..0.36 rows=1 width=16) (actual time=0.017..4.251 rows=1 loops=21956)
> Output: prog.id, prog.version, prog.active, prog.created_date, prog.last_modified_date, prog.created_by, prog.last_modified_by, prog.client_id, prog.scheme_id, prog.name, prog.legacy_group_id, prog.custom_fields, prog.setup_complete, prog.setup_messages, prog.legacy_program_type
> Index Cond: (prog.id = per.program)
> Buffers: shared hit=25437716
>
> The tables in both environments are about the same size (18MB) and the indexes are about the same size (360kb/410kb) – and the shared hits are pretty much the same on the other nodes of the query between the two environments.

I think it's because some heap pages are being visited many times, due to the
index tuples being badly "fragmented". Note, I'm not talking about
fragmentation of index *pages*, which is what pgstattuple reports (and which
wouldn't have nearly so detrimental an effect). Put another way, the index
tuples are badly "correlated" with the heap.
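
One rough proxy for that is pg_stats.correlation on the indexed column (table
and column names taken from the plan above; values near +/-1 mean the heap is
physically ordered much like that column, values near 0 mean lookups bounce
around the heap):

  -- how closely program's physical row order follows the id column
  SELECT schemaname, tablename, attname, correlation
  FROM pg_stats
  WHERE schemaname = 'public' AND tablename = 'program' AND attname = 'id';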

I'm guessing there are perhaps 25437716/87860 = ~290 index tuples per page, and
they rarely point to the same heap page as their siblings. "Hit" means this
affects you even though everything is cached (by postgres). So this is
apparently slow because each heap page is read ~290 times, rather than once to
get all of its tuples at once.

> This has happened one time before, and we did a “REINDEX” on the program table – and that made the problem mostly go away. Now it seems to be back, and I’m not sure what to make of it.

...which is consistent with my hypothesis.

You can use pg_repack, or the CREATE INDEX+DROP+RENAME hack (which is what
pg_repack -i does). In a fresh index, the tuples are sorted by heap TID. You
could also CLUSTER the table itself (or pg_repack -t) on that index column.
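
Roughly like this (a sketch only: it assumes "programPK" is a unique index
backing the primary key on program.id and that the constraint shares the
index's name, and the swap will fail if other tables' foreign keys reference
that constraint, in which case pg_repack or REINDEX is easier):

  -- build a fresh, compact copy of the index without blocking writes
  CREATE UNIQUE INDEX CONCURRENTLY "programPK_new" ON public.program (id);
  -- swap it in behind the PK; the new index is renamed to "programPK"
  ALTER TABLE public.program
      DROP CONSTRAINT "programPK",
      ADD CONSTRAINT "programPK" PRIMARY KEY USING INDEX "programPK_new";

  -- or reorder the heap itself by that index:
  CLUSTER public.program USING "programPK";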

In PG v12 you can use REINDEX CONCURRENTLY (but beware there's a crash
affecting its progress reporting, fix to be included in v12.1).
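
For example, using the index name from the plan above:

  REINDEX INDEX CONCURRENTLY "programPK";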

Justin
