From: | "Mike Biamonte" <mike(at)dbeat(dot)com> |
---|---|
To: | <pgsql-performance(at)postgresql(dot)org> |
Subject: | Huge Data sets, simple queries |
Date: | 2006-01-28 01:23:55 |
Message-ID: | 007101c623a9$82207c10$0200a8c0@videobox |
Lists: pgsql-performance
Does anyone have any experience with extremely large data sets?
I mean hundreds of millions of rows.
The queries I need to run on my 200 million transactions are relatively
simple:
select month, count(distinct cardnum), count(*), sum(amount)
from transactions group by month;
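For reference, this is roughly how I've been looking at the plan (plain
EXPLAIN only, since EXPLAIN ANALYZE would run the whole 18-hour query;
the column aliases are just illustrative):

  EXPLAIN
  SELECT month,
         count(DISTINCT cardnum) AS cards,
         count(*)                AS txns,
         sum(amount)             AS total
  FROM   transactions
  GROUP  BY month;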
This query took 18 hours on PG 8.1 on a Dual Xeon, RHEL3 (2.4 kernel),
with RAID-10 (15K drives) and 12 GB RAM. I was expecting it to take
about 4 hours, based on some experience with a similar dataset on a
different machine (RH9, PG 7.3, Dual Xeon, 4 GB RAM, RAID-5, 10K drives).
This machine is COMPLETELY devoted to running these relatively simple
queries one at a time. (No multi-user support needed!) I've been
tinkering with the various performance settings: effective_cache_size
at 5 GB, shared_buffers at 2 GB, work_mem and sort_mem at 1 GB each.
(shared_buffers puzzles me a bit - my instinct says to set it as high
as possible, but everything I read says that "too high" can hurt
performance.)
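Concretely, that looks roughly like the following (a sketch, assuming
work_mem is set per session; the value is in kB because this release
doesn't take unit suffixes, and the other two are set in
postgresql.conf, with shared_buffers needing a restart):

  SET work_mem = 1048576;     -- 1 GB, specified in kB
  SHOW shared_buffers;        -- target ~2 GB, set in postgresql.conf
  SHOW effective_cache_size;  -- target ~5 GB, set in postgresql.conf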
Any ideas for performance tweaking in this kind of application would be
greatly appreciated.
We've got indexes on the fields being grouped, and always vacuum analyze
after building them.
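For what it's worth, that amounts to roughly the following (the index
name is just illustrative):

  CREATE INDEX transactions_month_idx ON transactions (month);
  VACUUM ANALYZE transactions;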
It's difficult to just "try" various ideas because each attempt takes a
full day to test. Real
experience is needed here!
Thanks much,
Mike