Re: Query RE: Optimising UUID Lookups

From: David Rowley <dgrowleyml(at)gmail(dot)com>
To: Roland Dunn <roland(dot)dunn(at)gmail(dot)com>
Cc: Maxim Boguk <maxim(dot)boguk(at)gmail(dot)com>, postgres performance list <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Query RE: Optimising UUID Lookups
Date: 2015-03-24 07:49:46
Message-ID: CAApHDvouOYKYdaLgo4PaeL1sMWrXj0-fj0koWHyqtE5Yc4_cDA@mail.gmail.com
Lists: pgsql-performance

On 21 March 2015 at 23:34, Roland Dunn <roland(dot)dunn(at)gmail(dot)com> wrote:

>
> If we did add more RAM, would it be the effective_cache_size setting
> that we would alter? Is there a way to force PG to load a particular
> table into RAM? If so, is it actually a good idea?
>

Have you had a look at EXPLAIN (ANALYZE, BUFFERS) for the query?

Pay special attention to "Buffers: shared read=NNN" and "Buffers: shared
hit=NNN". If you're not reading any buffers between runs, then the pages are
in PostgreSQL's shared buffers; by the looks of your config you have 10GB of
these. On the other hand, if you are getting buffer reads, then the pages are
coming either from disk or from the OS cache, and PostgreSQL can't really
tell the difference between those two.
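
For example, with a hypothetical table and UUID (just a sketch to show
where the numbers appear; the names and values are made up):

  EXPLAIN (ANALYZE, BUFFERS)
  SELECT * FROM big_table WHERE id = '00000000-0000-0000-0000-000000000001';

and look in the output for lines like:

  Index Scan using big_table_pkey on big_table ...
    Buffers: shared hit=4 read=3

If on a repeat run read= drops to zero (EXPLAIN omits the counter entirely
when it's zero), the pages were all served from shared buffers.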

If you're not getting any buffer reads and it's still slow, then the
problem is not I/O.

Just for fun... what happens if you stick the 50 UUIDs in some table,
ANALYZE it, then perform a join between the two tables using IN() or
EXISTS()? Is that any faster?
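
Something like this (a rough sketch; big_table, its id column, and the
sample UUIDs are all placeholders for your actual schema and values):

  CREATE TEMP TABLE lookup_uuids (id uuid);
  INSERT INTO lookup_uuids VALUES
    ('00000000-0000-0000-0000-000000000001'),
    ('00000000-0000-0000-0000-000000000002');  -- plus the rest of your 50
  ANALYZE lookup_uuids;

  -- IN() form
  SELECT t.*
  FROM big_table t
  WHERE t.id IN (SELECT id FROM lookup_uuids);

  -- EXISTS() form
  SELECT t.*
  FROM big_table t
  WHERE EXISTS (SELECT 1 FROM lookup_uuids l WHERE l.id = t.id);

With up-to-date stats on the small table, the planner may pick a hash or
merge join instead of 50 separate index probes.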

Also, how well does it perform with: set enable_bitmapscan = off; ?
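
i.e. something like this (SET is session-local, so it only affects your
current connection; the query reuses the hypothetical temp table above):

  SET enable_bitmapscan = off;
  EXPLAIN (ANALYZE, BUFFERS)
  SELECT t.* FROM big_table t WHERE t.id IN (SELECT id FROM lookup_uuids);
  RESET enable_bitmapscan;

That way you can see whether a plain index scan plan beats the bitmap scan.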

Regards

David Rowley
