Re: Searching in varchar column having 100M records

From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: Sergei Kornilov <sk(at)zsrv(dot)org>
Cc: mayank rupareliya <mayankjr03(at)gmail(dot)com>, "pgsql-performance(at)lists(dot)postgresql(dot)org" <pgsql-performance(at)lists(dot)postgresql(dot)org>
Subject: Re: Searching in varchar column having 100M records
Date: 2019-07-17 12:48:46
Message-ID: 20190717124846.xbvcvfjg5hdcnaed@development
Lists: pgsql-performance

On Wed, Jul 17, 2019 at 02:53:20PM +0300, Sergei Kornilov wrote:
>Hello
>
>Please recheck with track_io_timing = on in the configuration. explain
>(analyze,buffers) with this option will report how much time we spend
>on I/O.
>
>> Buffers: shared hit=2 read=31492
>
>31492 blocks / 65 sec ~ 480 IOPS, which is not bad if you are using an HDD
>
>Your query reads table data from disk (well, or from the OS cache). You need
>more RAM for shared_buffers, or disks with better performance.
>
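
FWIW, a minimal sketch of what that looks like (track_io_timing needs
superuser or a postgresql.conf change, and the table/column names here are
just placeholders for the ones used earlier in the thread):

SET track_io_timing = on;

EXPLAIN (ANALYZE, BUFFERS)
SELECT user_id FROM my_table WHERE field = 'some value';

Each node that had to go to disk then reports "I/O Timings: read=..." next
to the buffer counts.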

Either that, or try creating a covering index, so that the query can do an
index-only scan. That might reduce the amount of I/O against the table, and
in the index the data should be stored close together (on the same page, or
on pages close to each other).

So try something like

CREATE INDEX ios_idx ON table (field, user_id);
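-- assuming (as the thread suggests) the query filters on "field" and only
-- needs "user_id": field first matches the WHERE clause, and including
-- user_id lets the result be returned from the index alone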

and make sure the table is vacuumed often enough (so that the visibility
map is up to date).
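
For example (again with placeholder names; autovacuum normally takes care of
this, but you can run it by hand and then check the plan):

VACUUM (ANALYZE) my_table;

EXPLAIN (ANALYZE, BUFFERS)
SELECT user_id FROM my_table WHERE field = 'some value';

What you want to see is "Index Only Scan using ios_idx" with a low "Heap
Fetches" count - if heap fetches stay high, the visibility map is not up to
date and the scan still has to visit the table.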

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
