Re: Searching in varchar column having 100M records

From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: mayank rupareliya <mayankjr03(at)gmail(dot)com>
Cc: pgsql-performance(at)lists(dot)postgresql(dot)org
Subject: Re: Searching in varchar column having 100M records
Date: 2019-07-19 18:39:46
Message-ID: 20190719183946.ls5vmus7ukhbl63w@development
Lists: pgsql-performance

On Fri, Jul 19, 2019 at 07:43:26PM +0530, mayank rupareliya wrote:
>Well, you haven't shown us the execution plan, so it's hard to check why
>it did not help much and give you further advice.
>
>
>This is the latest query execution with EXPLAIN ANALYZE after adding
>indexes on both columns.
>
>Aggregate  (cost=174173.57..174173.58 rows=1 width=8) (actual time=65087.657..65087.658 rows=1 loops=1)
>  ->  Bitmap Heap Scan on fields  (cost=1382.56..174042.61 rows=52386 width=0) (actual time=160.340..65024.533 rows=31857 loops=1)
>        Recheck Cond: ((field)::text = 'Champlin'::text)
>        Heap Blocks: exact=31433
>        ->  Bitmap Index Scan on index_field  (cost=0.00..1369.46 rows=52386 width=0) (actual time=125.078..125.079 rows=31857 loops=1)
>              Index Cond: ((field)::text = 'Champlin'::text)
>Planning Time: 8.595 ms
>Execution Time: 65093.508 ms
>

That plan very clearly does not use an index-only scan, so it's not
surprising it isn't any faster. You need to find out why the planner
makes that decision.
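
For reference, a minimal sketch of what checking for an index-only scan on
this kind of count(*) query might look like (table, column and index names
are taken from the plan above; the exact query text is my assumption):

    -- An index-only scan needs an index covering every referenced column
    -- and a reasonably fresh visibility map, which VACUUM maintains.
    CREATE INDEX IF NOT EXISTS index_field ON fields (field);

    VACUUM (ANALYZE) fields;   -- updates visibility map and statistics

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT count(*) FROM fields WHERE field = 'Champlin';

If the plan then shows "Index Only Scan using index_field on fields" with a
low "Heap Fetches" count, the heap is no longer visited for most rows. If
the planner still picks the bitmap scan, comparing plans in a test session
with SET enable_bitmapscan = off can show how it prices the alternatives.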

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
