From: Steve Crawford <scrawford(at)pinpointresearch(dot)com>
To: Rainer Mager <rainer(at)vanten(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: speeding up table creation
Date: 2008-10-14 23:56:41
Message-ID: 48F531B9.9010302@pinpointresearch.com
Lists: pgsql-performance
Rainer Mager wrote:
>
> I have an interesting performance improvement need. As part of the
> automatic test suite we run in our development environment, we
> re-initialize our test database a number of times in order to ensure
> it is clean before running a test. We currently do this by dropping
> the public schema and then recreating our tables (roughly 30 tables
> total). After that we do normal inserts, etc., but never with very much
> data. My question is, what settings can we tweak to improve
> performance in this scenario? Specifically, if there was a way to tell
> Postgres to keep all operations in memory, that would probably be ideal.
>
What is the test part? In other words, do you start with a known initial
database with all empty tables and then run the tests, or is the creation
of those tables itself part of the test? How much data is in the initial
database if the tables aren't empty? Creating 30 empty tables should
take a trivial amount of time. Also, are there schemas other than public?

A couple of ideas/comments:
You cannot keep the data in memory (that is, you can't disable writing
to the disk). But since you don't care about data loss, you could turn
off fsync in postgresql.conf. From a test perspective you should be fine
- it will only be an issue in the event of a crash, and then you can just
restart with a fresh load. Remember, however, that any performance
benchmarks won't translate to production use (of course they wouldn't
translate if you were using a ramdisk anyway).
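As a sketch, the relevant postgresql.conf change would look like the
fragment below (fsync is the setting mentioned above; synchronous_commit
and full_page_writes are additional standard durability settings you might
also consider relaxing on a throwaway test instance - my suggestion, not
something discussed in this thread):

```
# Durability settings for a disposable test database ONLY.
# A crash can corrupt the cluster; never use these in production.
fsync = off
synchronous_commit = off   # don't wait for WAL flush on commit
full_page_writes = off     # skip full-page images in WAL
```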
Note that the system catalogs are updated whenever you add/delete/modify
tables. Make sure they are being vacuumed, or your performance will
slowly degrade.
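For example (my sketch, assuming sufficient privileges - these are the
standard system catalogs that churn the most under repeated table
creation and dropping):

```
-- Vacuum the catalogs most affected by CREATE/DROP TABLE cycles:
VACUUM pg_catalog.pg_class;
VACUUM pg_catalog.pg_attribute;
VACUUM pg_catalog.pg_type;
VACUUM pg_catalog.pg_depend;
```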
My approach is to create a database exactly as you want it to be at the
start of your tests (fully vacuumed and all) and then use it as a
template for creating the test db each time. Then you can just
(with appropriate connection options) run "dropdb thetestdb" followed by
"createdb --template thetestdbtemplate thetestdb", which is substantially
faster than dropping and recreating the tables - especially if they
contain much data.
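The reset step above can be sketched as a small script (database and
template names are the ones from the example; connection options such as
-h/-p/-U are omitted, and note that dropdb fails if any session is still
connected to the target database):

```
#!/bin/sh
# Recreate the test database from a pristine template database.
set -e  # stop on the first failing command

dropdb thetestdb
createdb --template=thetestdbtemplate thetestdb
```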
Cheers,
Steve