Re: Querying a table with jaccard similarity with 1.6 million records take 12 seconds

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Michael Lewis <mlewis(at)entrata(dot)com>
Cc: Ninad Shah <nshah(dot)postgres(at)gmail(dot)com>, balasubramanian c r <crbs(dot)siebel(at)gmail(dot)com>, pgsql-general <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: Querying a table with jaccard similarity with 1.6 million records take 12 seconds
Date: 2021-09-02 19:44:09
Message-ID: 2266571.1630611849@sss.pgh.pa.us
Lists: pgsql-general

Michael Lewis <mlewis(at)entrata(dot)com> writes:
> This is showing many false positives from the index scan that get removed
> when the actual values are examined. With such a long search parameter,
> that does not seem surprising. I would expect a search on "raj nagar
> ghaziabad 201017" or something like that to yield far fewer results from
> the index scan. I don't know GIN indexes super well, but I would guess that
> including words that are very common will yield false positives that get
> filtered out later.

Yeah, the huge "Rows Removed" number shows that this index is very
poorly adapted to the query. I don't think the problem is with GIN
per se, but with a poor choice of how to use it. The given example
looks like what the OP really wants to do is full text search.
If so, a GIN index should be fine as long as you put tsvector/tsquery
filtering in front of it. If that's not a good characterization of
the goal, it'd help to tell us what the goal is. (Just saying "I
want to use jaccard similarity" sounds a lot like a man whose only
tool is a hammer, therefore his problem must be a nail, despite
evidence to the contrary.)
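
A minimal sketch of that approach, using hypothetical table and column
names (addresses, addr) that are not from the thread, might look like:

  -- Index the tsvector representation so the GIN index matches what the
  -- query filters on.  (The 'simple' config is an arbitrary choice here.)
  CREATE INDEX addresses_addr_fts_idx
      ON addresses
      USING gin (to_tsvector('simple', addr));

  -- The @@ tsquery filter lets the GIN index narrow candidates cheaply;
  -- ranking then runs only on that reduced set.
  SELECT addr,
         ts_rank(to_tsvector('simple', addr),
                 plainto_tsquery('simple', 'raj nagar ghaziabad 201017')) AS rank
  FROM addresses
  WHERE to_tsvector('simple', addr)
        @@ plainto_tsquery('simple', 'raj nagar ghaziabad 201017')
  ORDER BY rank DESC
  LIMIT 10;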

regards, tom lane
