Re: Measuring database IO for AWS RDS costings

From: Guillaume Lelarge <guillaume(at)lelarge(dot)info>
To: David Osborne <david(at)qcode(dot)co(dot)uk>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: Measuring database IO for AWS RDS costings
Date: 2014-08-13 18:50:13
Message-ID: CAECtzeUA5Gwk9km+6Erm3RD0kSdmtsWv8-2H-b3L3dFPprHJXA@mail.gmail.com
Lists: pgsql-admin

On 13 August 2014 at 15:47, "David Osborne" <david(at)qcode(dot)co(dot)uk> wrote:
>
>
> We have a test PostgreSQL AWS RDS instance running with a view to
> transferring our Live physical PostgreSQL workload to AWS.
>
> Apart from the cost of the instance, AWS also bills for IO requests per
> month.
>
> We are trying to work out how to estimate the IO costs our Live workload
> would attract.
> So if we can confirm that metrics x+y measured from within our test
> PostgreSQL instance on RDS map to z billable IO requests, then we can
> measure the same metrics from our Live PostgreSQL server and estimate costs.
>
> I believe that in the AWS world an IO request is each 16kB read from or
> written to disk.
> How would I go about measuring 16kB blocks read or written to disk from
> within PostgreSQL?
>
> I was hopeful of pg_stat_database, which has blks_read (which I believe
> are 8kB blocks), but there doesn't seem to be an equivalent blks_written
> column?
>

You're right that they are 8kB blocks (by default). But it's not reads from
disk; it's more "PostgreSQL asks the OS to give it the blocks". They may come
from the disk, but they may also come from the OS disk cache. You can't find
actual disk reads and writes from within PostgreSQL.
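
If you still want a ballpark figure from inside PostgreSQL, something like
the queries below might do, assuming the default 8kB block size and counting
two blocks as one 16kB request. Keep in mind these counters track buffer
traffic since stats_reset, not physical disk I/O, and they ignore WAL, so
treat the result as a rough upper bound rather than a billable figure.

-- 8kB blocks requested from the OS, per database
SELECT datname,
       blks_read,
       blks_hit,
       round(blks_read / 2.0) AS approx_16kb_read_requests
FROM pg_stat_database;

-- 8kB blocks written out by checkpoints, the background writer and
-- backends (cluster-wide totals)
SELECT buffers_checkpoint + buffers_clean + buffers_backend AS blks_written,
       round((buffers_checkpoint + buffers_clean + buffers_backend) / 2.0)
           AS approx_16kb_write_requests,
       stats_reset
FROM pg_stat_bgwriter;

Sample the counters twice, a representative interval apart, and work with the
difference rather than the cumulative totals since stats_reset.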
