Re: Why could different data in a table be processed with different performance?

From: Vladimir Ryabtsev <greatvovan(at)gmail(dot)com>
To: f(dot)pardi(at)portavita(dot)eu
Cc: pgsql-performance(at)lists(dot)postgresql(dot)org
Subject: Re: Why could different data in a table be processed with different performance?
Date: 2018-09-28 09:56:24
Message-ID: CAMqTPqkYs+sfikTkzDuBd8yXX7+pMThW1ErfBYzw-yLRT=9Y7Q@mail.gmail.com
Lists: pgsql-performance

> You will have lesser
> slots in the cache, but the total available cache will indeed be
> unchanged (half the blocks of double the size).
But we have many other tables, and queries against them may suffer from a
smaller number of blocks in the buffer cache.
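(For what it's worth, a sketch of how one could check which relations actually occupy the cache, assuming the contrib pg_buffercache extension is available and superuser access; the database name "mydb" is just a placeholder:)

```shell
# Sketch, assuming pg_buffercache is installed; lists the relations
# currently holding the most shared buffers in the current database.
psql -d mydb <<'SQL'
CREATE EXTENSION IF NOT EXISTS pg_buffercache;
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;
SQL
```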

> To change block size is a
> painful thing, because IIRC you do that at db initialization time
My research shows that I can only change it at compile time:
https://www.postgresql.org/docs/10/static/install-procedure.html
And then initdb a new cluster...
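For reference, the rough build-time procedure looks like this (a sketch only; the 16 kB size and the paths are just examples):

```shell
# Sketch: building PostgreSQL with a non-default block size.
# --with-blocksize takes the size in kB (default 8); paths are examples.
./configure --with-blocksize=16 --prefix=/opt/pgsql-16k
make && make install
# A new cluster must then be initialized; existing datafiles are
# unusable with a different block size, hence the dump/restore.
/opt/pgsql-16k/bin/initdb -D /opt/pgsql-16k/data
```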
Moreover, this table/schema is not the only one in the database; there is a
bunch of other schemas, and we would need to dump and restore everything... So
this is super-painful.

> It could affect space storage, for the smaller blocks.
But to what extent? As I understand it, rows are not "aligned" to the block
size, are they? Is it only a low-level I/O matter with the datafiles?
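(Back-of-the-envelope: each page carries a fixed header of about 24 bytes in PostgreSQL, so that fixed overhead roughly doubles as a fraction of the page each time the block size is halved; and since rows cannot span pages, the unusable slack at the end of each page also matters more with smaller blocks. A rough sketch of the header part only:)

```shell
# Rough arithmetic only: fixed page-header overhead (~24 bytes in
# PostgreSQL) as a percentage of the page, for a few block sizes.
# Ignores line pointers, tuple headers, and alignment padding.
for bs in 8192 4096 2048; do
  awk -v bs="$bs" 'BEGIN { printf "block %5d: header overhead %.2f%%\n", bs, 24 * 100 / bs }'
done
```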

> But before going through all this, I would first try to reload the data
> with dump+restore into a new machine, and see how it behaves.
Yes, this is the plan, I'll be back once I find enough disk space for my
further experiments.

Vlad
