From: Scott Marlowe <smarlowe(at)g2switchworks(dot)com>
To: Luke Lonergan <llonergan(at)greenplum(dot)com>
Cc: William Yu <wyu(at)talisys(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Hardware/OS recommendations for large databases (
Date: 2005-11-16 17:54:34
Message-ID: 1132163674.3582.81.camel@state.g2switchworks.com
Lists: pgsql-performance
On Wed, 2005-11-16 at 11:47, Luke Lonergan wrote:
> Scott,
Some cutting for clarity... I agree on the OLTP versus OLAP
discussion.
> Here are the facts so far:
> * Postgres can only use 1 CPU on each query
> * Postgres I/O for sequential scan is CPU limited to 110-120
> MB/s on the fastest modern CPUs
> * Postgres disk-based sort speed is 10x or more slower than
> that of commercial databases, and memory doesn't improve it (much)
But PostgreSQL only spills to disk if the data set won't fit into the
memory allocated by work_mem (sort_mem on 7.x). For most
business-analysis workloads that limit can be set quite high, and you
can even crank it up for a single query.
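A minimal sketch of that per-query bump (the parameter is work_mem on
8.x and sort_mem on 7.x; 512MB is just an illustrative value):

    -- raise sort memory for this session only; bare value is in kB
    -- (newer releases also accept unit suffixes like '512MB')
    SET work_mem = 524288;   -- use SET sort_mem on 7.x

    -- ... run the big report query here ...

    RESET work_mem;          -- drop back to the server default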
I've written reports that were horrifically slow, hitting the disk and
all; I upped sort_mem to hundreds of megabytes until the sort fit in
memory, and suddenly the query ran many times faster than before.
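A quick way to see whether a given sort is actually spilling is
EXPLAIN ANALYZE (the table and columns below are made up, and the
"Sort Method" line only appears in newer PostgreSQL releases):

    EXPLAIN ANALYZE
    SELECT customer_id, sum(amount) AS total
    FROM orders
    GROUP BY customer_id
    ORDER BY total DESC;

    -- A sort node reporting
    --   Sort Method: external merge  Disk: 183240kB
    -- spilled to disk; "quicksort  Memory: ..." means it fit
    -- inside work_mem.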
Or did you mean something else by "disk-based sort speed"???