Re: Query RE: Optimising UUID Lookups

From: Maxim Boguk <maxim(dot)boguk(at)gmail(dot)com>
To: Roland Dunn <roland(dot)dunn(at)gmail(dot)com>
Cc: postgres performance list <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Query RE: Optimising UUID Lookups
Date: 2015-03-24 05:28:07
Message-ID: CAK-MWwRwLqZ+AndcaYGoYTQOj1n_BhhcD+sSLtLs3x26KNkCsA@mail.gmail.com
Lists: pgsql-performance

Hi Roland,

> Do you speculate that the 10,000 rows would be randomly spread because
> of the uuid-type that we chose, namely the uuid-4 type? i.e. the
> completely random one? If we'd chosen the uuid-1 type (mac
> address+timestamp), rows would have been more regularly placed and so
> faster to pull back? Just curious as to why you said the randomly
> spaced. Also bear in mind that we did experiment with both btree and
> hash index on the uuid column.
>

No, I mean that the rows corresponding to those 10,000 UUIDs are very likely randomly distributed over the table, and as a result over the HDD.
So fetching each single row means one seek on the HDD, which usually takes 5-10ms.
You would see the same issue with an integer type (or any other type) as well.
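To put rough numbers on it (using the 5-10ms-per-seek figure above, not a measurement):

```sql
-- Back-of-envelope cost of 10,000 random heap fetches on a spinning disk,
-- assuming 5-10 ms per seek:
SELECT 10000 * 0.005 AS best_case_seconds,   -- 50 seconds
       10000 * 0.010 AS worst_case_seconds;  -- 100 seconds
```

So even with a perfect index, the seeks alone can dominate the query time.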

Btw, a good test of this theory is to execute the same query a few times in a short
period and see whether the second and later runs become faster (once the rows are served from cache instead of disk).
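A sketch of how to check that (table name "items" and uuid column "id" are placeholders, not your schema; EXPLAIN (ANALYZE, BUFFERS) is available in 9.0+):

```sql
-- Run this twice in a row. On the first run most blocks should show up as
-- "shared read" (fetched from disk); on the second run they should largely
-- become "shared hit" (served from shared buffers) and the runtime drop.
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM items
WHERE id = 'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::uuid;
```

If the second run is dramatically faster, random disk seeks are your bottleneck, not the index type.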

>
> RE: increasing the memory. Currently at 64GB, with following conf settings:
>
> max_connections = 100
> shared_buffers = 10GB
> effective_cache_size = 45GB
> work_mem = 100MB
> maintenance_work_mem = 1GB
> checkpoint_segments = 128
> checkpoint_completion_target = 0.9
> wal_buffers = 16MB
>
> Is it worth (do you think) experimenting with work_mem, and if so to
> what degree?
>

work_mem doesn't help there.

> If we did add more RAM, would it be the effective_cache_size setting
> that we would alter?

Yep.

> Is there a way to force PG to load a particular
> table into RAM? If so, is it actually a good idea?
>

There is no such way, short of setting shared_buffers equal to or larger than
the database size (if you have enough RAM, of course).
If the table is accessed quite actively and there is not much memory
pressure on the database, then the table will end up in RAM anyway after
some time.
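That said, if you only want to warm a table into cache once (e.g. after a restart), rather than pin it there, the pg_prewarm extension shipped with 9.4 can do that; a sketch, with "items" again as a placeholder table name:

```sql
CREATE EXTENSION IF NOT EXISTS pg_prewarm;

-- Reads the whole table into shared buffers and returns the number of
-- blocks read. It does not pin the table; normal eviction still applies.
SELECT pg_prewarm('items');
```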

--
Maxim Boguk
Senior Postgresql DBA
http://www.postgresql-consulting.ru/

Phone RU: +7 910 405 4718
Phone AU: +61 45 218 5678

LinkedIn: http://www.linkedin.com/pub/maksym-boguk/80/b99/b1b
Skype: maxim.boguk
Jabber: maxim(dot)boguk(at)gmail(dot)com
МойКруг: http://mboguk.moikrug.ru/

"People problems are solved with people.
If people cannot solve the problem, try technology.
People will then wish they'd listened at the first stage."
