Re: using shared_buffers during seq_scan

From: Albe Laurenz <laurenz(dot)albe(at)wien(dot)gv(dot)at>
To: "'Artem Tomyuk *EXTERN*'" <admin(at)leboutique(dot)com>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: using shared_buffers during seq_scan
Date: 2016-03-18 10:30:17
Message-ID: A737B7A37273E048B164557ADEF4A58B5381EB79@ntex2010a.host.magwien.gv.at
Lists: pgsql-performance

Artem Tomyuk wrote:
> Does Postgres use shared_buffers during a seq_scan?
> In what way can I optimize a seq_scan on big tables?

If the estimated table size is less than a quarter of shared_buffers,
the whole table will be read into shared buffers during a sequential scan.

If the table is larger than that, it is scanned using a ring
buffer of 256 KB inside the shared buffers, so only about 256 KB of the
table ends up in the shared buffer cache.
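
To see which case applies, you can compare the table size with a quarter
of shared_buffers, for example like this (a rough sketch; 'mytable' is a
placeholder, and the 8192 factor assumes the default block size of 8 kB):

  SELECT pg_table_size('mytable') AS table_bytes,
         (SELECT setting::bigint * 8192 / 4
            FROM pg_settings
           WHERE name = 'shared_buffers') AS quarter_of_shared_buffers;

If the pg_buffercache extension is installed, you can also get a rough
idea of how much of the table is sitting in shared buffers after a scan:

  CREATE EXTENSION IF NOT EXISTS pg_buffercache;

  -- approximate: counts buffers whose relfilenode matches the table
  SELECT count(*) * 8192 AS cached_bytes
    FROM pg_buffercache
   WHERE relfilenode = pg_relation_filenode('mytable');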

You can speed up all scans after the first one by having lots of RAM.
Even if you cannot make shared_buffers four times the size of the table,
you can still benefit from a large operating system cache.
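
If you want the table to be cached before the first scan, the pg_prewarm
contrib extension (available since 9.4) can load it explicitly; a small
sketch, again with 'mytable' standing in for your table:

  CREATE EXTENSION IF NOT EXISTS pg_prewarm;

  -- load the table into shared buffers (still limited by their size)
  SELECT pg_prewarm('mytable');

  -- or only into the operating system cache
  SELECT pg_prewarm('mytable', 'read');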

Yours,
Laurenz Albe
