From: David Rowley <david(dot)rowley(at)2ndquadrant(dot)com>
To: "samruohola(at)yahoo(dot)com" <samruohola(at)yahoo(dot)com>
Cc: "pgsql-performance(at)lists(dot)postgresql(dot)org" <pgsql-performance(at)lists(dot)postgresql(dot)org>, Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
Subject: Re: To keep indexes in memory, is large enough effective_cache_size enough?
Date: 2018-09-28 10:32:49
Message-ID: CAKJS1f_gy71UN2V07qN5G22Vcvz7hHoVGjYMrGvtquWdDxkeFg@mail.gmail.com
Lists: pgsql-performance
On 28 September 2018 at 16:45, Sam R. <samruohola(at)yahoo(dot)com> wrote:
> That was what I was suspecting a little. Double buffering may not matter in
> our case, because the whole server is meant for PostgreSQL only.
>
> In our case, we can e.g. reserve almost "all memory" for PostgreSQL (shared
> buffers etc.).
>
> Please correct me if I am wrong.
You mentioned above:
> RAM: 64 GB
> Data: 500 GB - 1.5 TB, for example.
If most of that data just sits on disk and is never read, then you
might be right, but if the working set of the data is larger than RAM,
then you might find you get better performance from smaller shared
buffers, since that leaves more memory for the kernel's page cache to
hold a larger share of the working set.
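One way to see how much of each relation currently sits in shared buffers is the pg_buffercache contrib extension (assuming it is installed and the default 8 kB block size); a rough sketch:

```sql
-- Requires the pg_buffercache contrib module
CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- Top 10 relations by number of shared buffers they occupy
SELECT c.relname,
       count(*) AS buffers,
       pg_size_pretty(count(*) * 8192) AS buffered
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;
```

If the hot tables and indexes account for far more than shared_buffers can hold, that is a sign the working set exceeds what the configured buffer pool can cache on its own.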
I think the best thing you can do is test this. Write some code that
mocks up a realistic production workload and see where you get the
best performance.
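If writing a custom workload generator is too much effort, pgbench can approximate this; a minimal sketch, where "testdb" is a placeholder database and the scale factor is only a rough stand-in for your real data volume:

```shell
# Initialize a pgbench test database; scale 10000 is roughly 150 GB
# of data, so adjust toward your real data size (500 GB - 1.5 TB).
pgbench -i -s 10000 testdb

# Run a read-only workload (-S) for 10 minutes with 16 clients,
# repeating the run with different shared_buffers settings
# (e.g. 8 GB vs 48 GB out of the 64 GB of RAM) and comparing TPS.
pgbench -c 16 -j 4 -T 600 -S testdb
```

The built-in pgbench workload is uniformly distributed, which is usually more cache-hostile than a real production access pattern, so treat the numbers as a lower bound and prefer a custom script (`pgbench -f`) that mimics your actual queries where possible.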
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services