New server optimization advice

From: Steve Crawford <scrawford(at)pinpointresearch(dot)com>
To: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: New server optimization advice
Date: 2015-01-09 19:26:13
Message-ID: 54B02B55.2050803@pinpointresearch.com
Lists: pgsql-performance

I will soon be migrating to some recently acquired hardware and seek
input from those who have gone before.

A quick overview: the dataset size is ~100GB, (~250-million tuples) with
a workload that consists of about 2/3 writes, mostly single record
inserts into various indexed tables, and 1/3 reads. Queries per second
peak around 2,000, and our application typically demands fast responses:
for many of these queries the timeout is set to 2 seconds, and the
application moves forward and recovers later if that is exceeded.
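(For reference, a timeout like this is commonly configured in PostgreSQL as
shown below; the role name is hypothetical and the post does not say how the
timeout is actually applied, so this is only an illustrative sketch.)

```sql
-- Per-session timeout (illustrative; not from the original post):
SET statement_timeout = '2s';

-- Or persistently for a hypothetical application role:
ALTER ROLE app_user SET statement_timeout = '2s';
```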

Although few by count, every hour there are dozens of import and
analysis queries involving multiple tables and tens of thousands of
records. These queries may take up to a few minutes on our current
hardware.

Old hardware is 4-core, 24GB RAM, battery-backed RAID-10 with four 15k
drives.

New hardware is quite different. 2x10-core E5-2660v3 @2.6GHz, 128GB
DDR4-2133 RAM and 800GB Intel DC P3700 NVMe PCIe SSD. In essence, the
dataset will fit in RAM and will be backed by exceedingly fast storage.

This new machine is very different from any we've had before, so any
current thinking on optimization would be appreciated. Do I leave
indexes as is and evaluate which ones to drop later? Any recommendations
on distribution and/or kernels (and kernel tuning)? PostgreSQL tuning
starting points? Whatever comes to mind.
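(For context, a first-guess postgresql.conf sketch for this class of
hardware, 128GB RAM with fast NVMe storage, might look like the
following; every number here is an illustrative assumption, not a
recommendation from this thread.)

```
# Hypothetical starting points for 128GB RAM + NVMe SSD (assumptions only):
shared_buffers = 16GB             # a common first guess is 1/8 to 1/4 of RAM
effective_cache_size = 96GB       # roughly RAM minus OS and other usage
random_page_cost = 1.1            # low-latency SSD: near-sequential cost
effective_io_concurrency = 200    # NVMe handles many concurrent reads
work_mem = 64MB                   # per sort/hash; size with concurrency in mind
maintenance_work_mem = 2GB        # faster index builds and vacuum
checkpoint_completion_target = 0.9
```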

Thanks,
Steve
