From: Vladimir Ryabtsev <greatvovan(at)gmail(dot)com>
To: laurenz(dot)albe(at)cybertec(dot)at
Cc: pgsql-performance(at)lists(dot)postgresql(dot)org
Subject: Re: Why could different data in a table be processed with different performance?
Date: 2018-09-21 06:28:27
Message-ID: CAMqTPqk-kG7i_=TtDr57zTsBbP4-O51U_g-d5y4Hqssa6cEPbA@mail.gmail.com
Lists: pgsql-performance
> Setting "track_io_timing = on" should measure the time spent doing I/O
> more accurately.
I do see I/O timings after enabling this. They show that 96.5% of the long
queries' time is spent on I/O. If I subtract the I/O time from the total, I
get ~1.4 s for 5000 rows, which is the SAME for both ranges if I adjust the
segment borders accordingly (to match ~5000 rows). Only the I/O time
differs, and it differs significantly.
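To illustrate what I mean (table and column names below are placeholders,
not the real ones), the measurement looks roughly like this:

SET track_io_timing = on;

EXPLAIN (ANALYZE, BUFFERS)
SELECT payload                        -- the JSONB column
FROM articles                         -- hypothetical table name
WHERE id BETWEEN 1000000 AND 1005000; -- one ~5000-row segment

With track_io_timing enabled, the plan prints an "I/O Timings: read=..."
line (milliseconds spent in read calls) next to the buffer counts; that is
the number I subtract from the total execution time to get the ~1.4 s of
non-I/O processing per segment.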
> One problem with measuring read speed that way is that "buffers read" can
> mean "buffers read from storage" or "buffers read from the file system
> cache",
I understand; that is why I ran the experiments with drop_caches.
> but you say you observe a difference even after dropping the cache.
No, I am saying that I see NO significant difference (within measurement
error) between "with caches" and "after dropping caches". And this is
explainable, I think: since I read almost all of the data from the huge
table consecutively, no cache can hold that much data, so caching cannot
significantly influence the results. And while the PK index *could* be
cached (in theory), I think its pages are being displaced from the buffers
by the bulkier JSONB data.
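One way to check that displacement theory (a sketch, assuming the
pg_buffercache extension is available; relation names will differ) is to
count how many shared_buffers pages each relation currently occupies:

CREATE EXTENSION IF NOT EXISTS pg_buffercache;

SELECT c.relname,
       count(*)                        AS buffers,
       pg_size_pretty(count(*) * 8192) AS buffered   -- assumes 8 kB pages
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE b.reldatabase = (SELECT oid FROM pg_database
                       WHERE datname = current_database())
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;

If the TOAST table holding the JSONB values dominates this list while the
PK index barely appears, that would support the displacement explanation.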
Vlad