Re: Searching in varchar column having 100M records

From: "David G(dot) Johnston" <david(dot)g(dot)johnston(at)gmail(dot)com>
To: mayank rupareliya <mayankjr03(at)gmail(dot)com>
Cc: pgsql-performance(at)lists(dot)postgresql(dot)org
Subject: Re: Searching in varchar column having 100M records
Date: 2019-07-17 13:57:27
Message-ID: CAKFQuwZ9CnUXB8O+yyOPiEkS5DBDcMPy-Ag1MxmRP2XjYDpTTA@mail.gmail.com
Lists: pgsql-performance

On Wed, Jul 17, 2019 at 4:04 AM mayank rupareliya <mayankjr03(at)gmail(dot)com>
wrote:

> create table fields(user_id varchar(64), field varchar(64));
> CREATE INDEX index_field ON public.fields USING btree (field);
>
> Any suggestions for improvement?
>

Reduce the number of rows by constructing a relationally normalized data
model.
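
For instance, a minimal sketch of one way to normalize this (assuming the field
values repeat across many of the 100M rows; the table and column names below are
illustrative, not from the original post):

-- Store each distinct field value once, keyed by a compact integer.
CREATE TABLE field_names (
    field_id int GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    field    varchar(64) NOT NULL UNIQUE
);

-- The large table references the lookup table by integer instead of
-- repeating the varchar in every row.
CREATE TABLE user_fields (
    user_id  varchar(64) NOT NULL,
    field_id int NOT NULL REFERENCES field_names (field_id)
);

CREATE INDEX index_user_fields_field_id ON user_fields (field_id);

-- A search then resolves the varchar once in the small lookup table
-- and scans the big table by integer key:
SELECT uf.user_id
FROM field_names fn
JOIN user_fields uf USING (field_id)
WHERE fn.field = 'some value';

The field_names table holds one row per distinct value rather than 100M, and a
btree over an int column is much smaller than one over varchar(64), so more of
it stays in cache. How much this helps depends on how many distinct field
values the data actually contains.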

David J.
