Re: Searching in varchar column having 100M records

From: Sergei Kornilov <sk(at)zsrv(dot)org>
To: mayank rupareliya <mayankjr03(at)gmail(dot)com>, "pgsql-performance(at)lists(dot)postgresql(dot)org" <pgsql-performance(at)lists(dot)postgresql(dot)org>
Subject: Re: Searching in varchar column having 100M records
Date: 2019-07-17 11:53:20
Message-ID: 27574631563364400@iva5-fb4da115b4b5.qloud-c.yandex.net
Lists: pgsql-performance

Hello

Please recheck with track_io_timing = on in your configuration. With this option enabled, EXPLAIN (ANALYZE, BUFFERS) will report how much time is spent on I/O.
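A minimal sketch of what that looks like in a session (table and column names are placeholders, not from the original thread; the exact plan output will vary on your system):

```sql
-- Enable per-query I/O timing for this session (requires no restart).
SET track_io_timing = on;

-- Hypothetical query; substitute the actual table/column from the thread.
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM my_table WHERE my_column = 'some value';

-- With track_io_timing on, plan nodes gain a line such as:
--   I/O Timings: read=NNNN.NNN
-- i.e. milliseconds spent reading blocks not found in shared_buffers.
```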

> Buffers: shared hit=2 read=31492

31492 blocks / 65 sec ≈ 480 IOPS — not bad if you are using an HDD.

Your query reads table data from disks (well, or from OS cache). You need more RAM for shared_buffers or disks with better performance.
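If you go the shared_buffers route, a postgresql.conf sketch might look like the following. The values are illustrative assumptions only (a common rule of thumb is roughly 25% of system RAM), not figures from this thread:

```
# postgresql.conf -- illustrative values, assuming a machine with ~32GB RAM
shared_buffers = 8GB       # rule of thumb: ~25% of RAM; requires a restart
track_io_timing = on       # keep I/O timings visible in EXPLAIN (ANALYZE, BUFFERS)
```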

regards, Sergei

Browse pgsql-performance by date

  From Date Subject
Next Message Tomas Vondra 2019-07-17 12:48:46 Re: Searching in varchar column having 100M records
Previous Message mayank rupareliya 2019-07-17 11:03:41 Searching in varchar column having 100M records