From: Christiaan Willemsen <cwillemsen(at)technocon(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Using high speed swap to improve performance?
Date: 2010-04-02 19:15:00
Message-ID: vmime.4bb64234.61d6.751839cb97a3e6d@yoda.dhcp.tecnocon.com
Lists: pgsql-performance
Hi there,
About a year ago we set up a machine with sixteen 15k disk spindles on Solaris using ZFS. Now that Oracle has acquired Sun and is winding down Solaris, we want to move away (we are more familiar with Linux anyway).
So the plan is to move to Linux and put the data on a SAN using iSCSI (over two or four network interfaces). This, however, leaves us with 16 very nice disks doing nothing, which seems like a waste. If we stayed on Solaris, ZFS would have a solution: use them as L2ARC. But no Linux filesystem offers that feature (ZFS on FUSE is not really an option).
So I was thinking: why not build a big, fast array from 14 of the disks (RAID 1, 10, or 5) and use it as a large swap device? Latency would be lower than the SAN can provide, throughput would be better, and it would relieve the SAN of a lot of read IOPS.
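Concretely, the setup I have in mind would look something like this on Linux (a rough sketch only; the device names /dev/sd[b-o] are placeholders for the 14 spindles):

```shell
# Assemble 14 disks into a RAID-10 array (could also be RAID 1 or 5)
mdadm --create /dev/md0 --level=10 --raid-devices=14 /dev/sd[b-o]

# Format the array as swap and enable it with a high priority,
# so the kernel prefers it over any existing, slower swap space
mkswap /dev/md0
swapon --priority 10 /dev/md0

# Verify the new swap device and its priority
swapon --show
```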
So I could create a 1TB swap device and present it to the OS alongside the 64GB of RAM. Then I could configure Postgres to use more memory than is physically available, so the machine starts swapping to the fast array. To Postgres it would appear that the complete database fits in memory. The question is: would this do any good? And if so, what would happen?
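The "oversized" settings might look like this in postgresql.conf (values are purely illustrative; the point is that shared_buffers above physical RAM is what would push buffer pages out to the swap array):

```
# Hypothetical sketch: deliberately exceed the 64GB of physical RAM,
# relying on the kernel to page shared buffers out to the fast swap device
shared_buffers = 256GB          # far above physical RAM
effective_cache_size = 900GB    # tell the planner nearly everything is cached
```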
Kind regards,
Christiaan