From: Chris Kratz <chris(dot)kratz(at)vistashare(dot)com>
To: John Arbash Meinel <john(at)arbash-meinel(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Performance Tuning
Date: 2005-02-09 21:25:34
Message-ID: 200502091625.34787.chris.kratz@vistashare.com
Lists: pgsql-performance
On Wednesday 09 February 2005 03:38 pm, John Arbash Meinel wrote:
>...
> I'm very surprised you are doing RAID 0. You realize that if 1 drive
> goes out, your entire array is toast, right? I would recommend doing
> either RAID 10 (0+1), or even Raid 5 if you don't do a lot of writes.
<grin> Yeah, we know. This is a development server and we drop and reload
databases regularly (sometimes several times a day). In this case we don't
really care about the integrity of the data since it's for our developers to
test code against. Also, the system is on a mirrored set of drives. On our
live servers we have hardware RAID 1 at this point for the data drives. When
I/O becomes a bottleneck, we are planning on moving to RAID 10 for the data
and RAID 1 for the transaction log with as many drives as I can twist arms
for. Up to this point it has been easier just to stuff the servers full of
memory and let the OS cache the db in memory. We know that at some point
this will no longer work, but for now it does.
As a side note, I learned something very interesting for our developers here.
We had been doing a drop database and then a reload off a db dump from our
live server for test data. This takes 8-15 minutes depending on the server
(the one above takes about 8 minutes). I learned through testing that I can
use CREATE DATABASE ... TEMPLATE some_other_database and make a duplicate in
about 2.5 minutes, which is a huge gain for us. We can load a pristine copy,
make a duplicate, do our testing on the duplicate, then drop the duplicate and
create a new duplicate in less than five minutes.
Cool.
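Roughly, it looks something like this, with made-up database names standing
in for ours:

-- load a pristine copy once from the dump of the live server
--   createdb pristine_copy
--   psql pristine_copy < live_dump.sql
-- then make a cheap duplicate to test against
CREATE DATABASE test_copy TEMPLATE pristine_copy;
-- when the tests have trashed it, toss it and make a fresh one
DROP DATABASE test_copy;
CREATE DATABASE test_copy TEMPLATE pristine_copy;

Note that nothing can be connected to the template database while the copy
runs, or the CREATE DATABASE will fail.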
> Probably most important, though is to look at the individual queries and
> see what they are doing.
>
> >Postgresql 7.4.5 installed via RPM running on Linux kernel 2.6.8.1
> >
> >Items changed in the postgresql.conf:
> >
> >tcpip_socket = true
> >max_connections = 32
> >port = 5432
> >shared_buffers = 12288 # min 16, at least max_connections*2, 8KB each
> >sort_mem=16384
> >vacuum_mem = 32768 # min 1024, size in KB
> >max_fsm_pages = 60000 # min max_fsm_relations*16, 6 bytes each
> >max_fsm_relations = 1000 # min 100, ~50 bytes each
> >effective_cache_size = 115200 # typically 8KB each
> >random_page_cost = 1 # units are one sequential page fetch cost
>
> Most of these seem okay to me, but random page cost is *way* too low.
> This should never be tuned below 2. I think this says "an index scan of
> *all* rows is as cheap as a sequential scan of all rows." and that
> should never be true.
You caught me. I actually tweaked that today after finding a page that
suggested doing that if the data was mostly in memory. I had been running it
at 2, and since we didn't notice any improvement at 1, it will be going back
to 2.
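In case anyone wants to poke at the same thing, the setting can be flipped
per-session without touching postgresql.conf:

SET random_page_cost = 1;
-- run the queries of interest, then put it back
SET random_page_cost = 2;

The permanent value just goes in postgresql.conf and gets picked up on a
reload.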
> What could actually be happening is that you are getting index scans
> when a sequential scan would be faster.
>
> I don't know what you would see, but what does "explain analyze select
> count(*) from blah;" say. If it is an index scan, you have your machine
> mistuned. select count(*) always grabs every row, and this is always
> cheaper with a sequential scan.
>
> John
> =:->
With a random_page_cost set to 1, on a larger table a select count(*) nets
this...
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=9848.12..9848.12 rows=1 width=0) (actual time=4916.869..4916.872 rows=1 loops=1)
   ->  Seq Scan on answer  (cost=0.00..8561.29 rows=514729 width=0) (actual time=0.011..2624.202 rows=514729 loops=1)
Total runtime: 4916.942 ms
(3 rows)
Now here is a very curious thing. If I turn on timing and run the count
without explain analyze, I get...
count
--------
514729
(1 row)
Time: 441.539 ms
How odd. Running it under explain analyze adds about 4.5s, and running the
explain again comes back at almost 5s. Now I wonder why that would be
different.
Changing random_page_cost back to 2 nets little difference (4991.940ms for
the explain and 496ms without), but we will leave it at that for now.
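For reference, the comparison above is just psql's \timing output versus the
explain analyze of the same statement, along these lines:

\timing
SELECT count(*) FROM answer;                  -- Time: ~441 ms
EXPLAIN ANALYZE SELECT count(*) FROM answer;  -- Total runtime: ~4900 ms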
--
Chris Kratz
Systems Analyst/Programmer
VistaShare LLC
www.vistashare.com