| From: | Toby Corkindale <toby(dot)corkindale(at)strategicdata(dot)com(dot)au> |
|---|---|
| To: | pgsql-general(at)postgresql(dot)org |
| Subject: | Re: SSDs with Postgresql? |
| Date: | 2011-04-29 04:32:43 |
| Message-ID: | 4DBA3F6B.1010309@strategicdata.com.au |
| Lists: | pgsql-general |
On 22/04/11 01:33, Florian Weimer wrote:
> * Greg Smith:
>
>> The fact that every row update can temporarily use more than 8K means
>> that actual write throughput on the WAL can be shockingly large. The
>> smallest customer I work with regularly has a 50GB database, yet they
>> write 20GB of WAL every day. You can imagine how much WAL is
>> generated daily on systems with terabyte databases.
>
> Interesting. Is there an easy way to monitor WAL traffic? It does
> not have to be fine-grained, but it might be helpful to know if
> we're doing 10 GB, 100 GB or 1 TB of WAL traffic on a particular
> database, should the question of SSDs ever come up.
One thought I had on monitoring write usage:
If you're on Linux with the ext4 filesystem, it keeps track of some
write statistics for you.
Check out /sys/fs/ext4/$DEV/lifetime_write_kbytes
(where $DEV is the block device the filesystem lives on, e.g. sda1, dm-0, or
whatnot - see /dev/mapper to get the mappings from LVM volumes to dm-numbers).
If you log that value every day, you could get an idea of your daily
write load.
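
As a minimal sketch of that idea (in Python), something like the following could run from cron once a day: it appends the current counter value to a log and prints the delta since the previous sample. The device name "dm-0" and the log path are assumptions for illustration; adjust them for your system.

```python
#!/usr/bin/env python3
# Sketch: sample ext4's lifetime_write_kbytes once a day and report the
# delta since the last run. Device name and log path are assumptions;
# run it (e.g. from cron) as a user that can write to the chosen log path.

import time
from pathlib import Path

DEV = "dm-0"  # block device holding the filesystem (assumption)
COUNTER = Path(f"/sys/fs/ext4/{DEV}/lifetime_write_kbytes")
LOG = Path("/var/log/ext4_writes.log")  # where samples are appended (assumption)

def main():
    now = int(time.time())
    written_kb = int(COUNTER.read_text().strip())

    # Read the previous sample, if any, so we can compute a delta.
    last = None
    if LOG.exists():
        lines = LOG.read_text().splitlines()
        if lines:
            last_ts, last_kb = map(int, lines[-1].split())
            last = (last_ts, last_kb)

    # Append this sample as "unix_timestamp kbytes_written".
    with LOG.open("a") as f:
        f.write(f"{now} {written_kb}\n")

    if last:
        hours = (now - last[0]) / 3600.0
        delta_gb = (written_kb - last[1]) / (1024.0 * 1024.0)
        print(f"~{delta_gb:.2f} GB written in the last {hours:.1f} hours")

if __name__ == "__main__":
    main()
```

Note that the counter covers all writes to that filesystem, not just WAL, so unless the WAL lives on its own filesystem the figure is an upper bound on WAL traffic.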
-Toby