From: Andreas Kretschmer <andreas(at)a-kretschmer(dot)de>
To: pgsql-performance(at)lists(dot)postgresql(dot)org
Subject: Re: Searching in varchar column having 100M records
Date: 2019-07-17 13:00:38
Message-ID: 5322aa5e-9913-5471-7254-c5fff6c09146@a-kretschmer.de
Lists: pgsql-performance
On 17.07.19 at 14:48, Tomas Vondra wrote:
> Either that, or try creating a covering index, so that the query can
> do an index-only scan. That might reduce the amount of IO against
> the table, and in the index the data should be located close to each
> other (same page or pages close to each other).
>
> So try something like
>
> CREATE INDEX ios_idx ON table (field, user_id);
>
> and make sure the table is vacuumed often enough (so that the visibility
> map is up to date).
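
To check that the planner actually does an index-only scan, something
like this should work (my_table stands in for the real table name,
which isn't given in the thread):

    VACUUM (ANALYZE) my_table;   -- refresh the visibility map and stats

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT user_id FROM my_table WHERE field = 'some value';

An "Index Only Scan using ios_idx" node with "Heap Fetches: 0" means
the query is answered from the index alone, without touching the heap.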
yeah, and please don't use varchar(64) but UUID for the user_id field,
to save space on disk and for faster comparisons.
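
A minimal sketch of that change, assuming the table is called my_table
(the real name isn't in the thread) and that every stored user_id
value already parses as a UUID:

    -- rewrites the table and rebuilds dependent indexes,
    -- holding an ACCESS EXCLUSIVE lock for the duration
    ALTER TABLE my_table
        ALTER COLUMN user_id TYPE uuid
        USING user_id::uuid;

uuid is stored as 16 raw bytes and compared bytewise, versus 37 bytes
and collation-aware comparison for the 36-character textual form.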
Regards, Andreas
--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com