From: Kohei KaiGai <kaigai(at)kaigai(dot)gr(dot)jp>
To: PgHacker <pgsql-hackers(at)postgresql(dot)org>
Subject: Does larger i/o size make sense?
Date: 2013-08-22 19:53:37
Message-ID: CADyhKSVOpPyWfRJ-vAwsNzL=Hy_O5aUweWDDgh6k94gXH1jLSQ@mail.gmail.com
Lists: pgsql-hackers
Hello,
A few days ago, the question in the subject line came up in a
discussion with a colleague.
In general, a larger I/O size per system call yields higher
sequential-read bandwidth than issuing many system calls with a
smaller I/O size; this heuristic is probably well known. (A quick
sketch of the effect follows.)
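To illustrate the heuristic, here is a minimal standalone sketch
(nothing from the PostgreSQL tree; the test file path and chunk
sizes are just placeholders) that reads the same file in 8KB and
then 1MB chunks:

/*
 * Minimal sketch: compare sequential-read time at two I/O sizes.
 * The file path and the chunk sizes are arbitrary examples.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <unistd.h>

static double
read_with_chunk(const char *path, size_t chunk)
{
    char   *buf = malloc(chunk);
    int     fd = open(path, O_RDONLY);
    struct timeval tv1, tv2;

    if (fd < 0 || buf == NULL)
    {
        perror("open/malloc");
        exit(1);
    }
    /* ask the kernel to drop cached pages, so the runs are comparable */
    posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);

    gettimeofday(&tv1, NULL);
    while (read(fd, buf, chunk) > 0)
        ;
    gettimeofday(&tv2, NULL);

    close(fd);
    free(buf);
    return (tv2.tv_sec - tv1.tv_sec) +
           (tv2.tv_usec - tv1.tv_usec) / 1000000.0;
}

int
main(void)
{
    const char *path = "/tmp/testfile";     /* placeholder */

    printf("8KB chunks: %.3f sec\n", read_with_chunk(path, 8192));
    printf("1MB chunks: %.3f sec\n", read_with_chunk(path, 1024 * 1024));
    return 0;
}

On most storage, the 1MB run finishes noticeably faster, simply
because the kernel and the device see far fewer, larger requests.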
On the other hand, PostgreSQL always reads database files in units
of BLCKSZ (usually 8KB) whenever a referenced block is not in shared
buffers, and it does not seem to me that this can pull the maximum
performance out of a modern storage system.
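For illustration, the current per-miss pattern boils down to a
single pread() of BLCKSZ; the following is a simplified sketch of
the shape of that I/O, not the actual code in md.c:

/* Simplified sketch of the one-block-at-a-time pattern; not the
 * actual md.c code, just the shape of the request it issues. */
#include <sys/types.h>
#include <unistd.h>

#define BLCKSZ 8192

static int
read_one_block(int fd, unsigned int blocknum, char *buffer)
{
    off_t   offset = (off_t) blocknum * BLCKSZ;

    /* one system call per 8KB block, even during a sequential scan */
    return pread(fd, buffer, BLCKSZ, offset) == BLCKSZ ? 0 : -1;
}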
I'm not sure whether this kind of idea has been discussed here
before. If similar ideas were rejected in the past, I'd like to know
why we stick to a fixed I/O size.
The idea I'd like to investigate is this: when a block is referenced
by a sequential scan, PostgreSQL allocates a set of contiguous
buffers sized for a larger I/O, then issues one consolidated I/O
request to fill them (sketched below). This should pay off whenever
we can expect upcoming references to land on neighboring blocks,
which is exactly the typical sequential-read workload.
Of course, we would need to solve some complicated issues along the
way, such as preventing fragmentation of shared buffers and
extending the storage manager's internal APIs to accept larger I/O
sizes.
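To make the proposal a bit more concrete, here is a hypothetical
sketch of the consolidated read itself; read_block_run() is an
invented name, and the real change would of course live behind the
storage manager API rather than call pread() directly:

/*
 * Hypothetical consolidated read: one pread() pulls a run of
 * adjacent blocks into a contiguous staging area, which would then
 * back (or be copied into) a run of contiguous shared buffers.
 */
#include <sys/types.h>
#include <unistd.h>

#define BLCKSZ 8192
#define IO_RUN 16               /* 16 x 8KB = 128KB per system call */

static int
read_block_run(int fd, unsigned int startblk, char *buffers)
{
    off_t   offset = (off_t) startblk * BLCKSZ;
    ssize_t nread = pread(fd, buffers, (size_t) IO_RUN * BLCKSZ, offset);

    if (nread < 0)
        return -1;
    /* a short read near EOF still yields nread / BLCKSZ whole blocks */
    return (int) (nread / BLCKSZ);
}

The hard part is less the read itself than keeping those IO_RUN
buffers physically adjacent in shared memory without fragmenting the
buffer pool, which is exactly the complication noted above.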
Even so, this idea seems to me to be worth investigating.
Any comments? Thanks,
--
KaiGai Kohei <kaigai(at)kaigai(dot)gr(dot)jp>