From: Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com>
To: Steve Atkins <steve(at)blighty(dot)com>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Cc: "Wes Vaske (wvaske)" <wvaske(at)micron(dot)com>
Subject: Re: Fastest Backup & Restore for perf testing
Date: 2015-05-28 17:40:15
Message-ID: 556752FF.3070402@BlueTreble.com
Lists: pgsql-performance
On 5/27/15 3:39 PM, Steve Atkins wrote:
>
>> On May 27, 2015, at 1:24 PM, Wes Vaske (wvaske) <wvaske(at)micron(dot)com> wrote:
>>
>> Hi,
>>
>> I’m running performance tests against a PostgreSQL database (9.4) with various hardware configurations and a couple different benchmarks (TPC-C & TPC-H).
>>
>> I’m currently using pg_dump and pg_restore to refresh my dataset between runs but this process seems slower than it could be.
>>
>> Is it possible to do a tar/untar of the entire /var/lib/pgsql tree as a backup & restore method?
>>
>> If not, is there another way to restore a dataset more quickly? The database is dedicated to the test dataset so trashing & rebuilding the entire application/OS/anything is no issue for me—there’s no data for me to lose.
>>
>
> Dropping the database and recreating it from a template database with "create database foo template foo_template" is about as fast as a file copy, much faster than pg_restore tends to be.
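For concreteness, that refresh cycle could look roughly like this (the
database names and the dump file below are just placeholders, assuming
the benchmark dataset is loaded into the template once up front):

  # One-time setup: load the dataset into a template database.
  createdb bench_template
  pg_restore -d bench_template benchmark.dump

  # Between runs: drop the dirtied copy and re-clone it from the template.
  # The clone is essentially a file-level copy, so it skips the inserts
  # and index builds that pg_restore has to redo every time.
  dropdb --if-exists bench
  createdb --template=bench_template bench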
Another possibility is filesystem snapshots, which could be even faster
than createdb --template.
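As a rough sketch (assuming the data directory sits on its own ZFS
dataset; the dataset name and paths below are made up, and the cluster
is stopped so the snapshot is a clean cold copy):

  # One-time: stop the cluster and snapshot the pristine data directory.
  pg_ctl -D /var/lib/pgsql/9.4/data stop
  zfs snapshot tank/pgdata@pristine
  pg_ctl -D /var/lib/pgsql/9.4/data start

  # Between runs: stop, roll the filesystem back, restart.
  pg_ctl -D /var/lib/pgsql/9.4/data stop
  zfs rollback tank/pgdata@pristine
  pg_ctl -D /var/lib/pgsql/9.4/data start

LVM or any other filesystem with cheap snapshots would work the same way.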
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Data in Trouble? Get it in Treble! http://BlueTreble.com