From: Kevin Kempter <kevink(at)consistentstate(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: moving data between tables causes the db to overwhelm the system
Date: 2009-09-01 09:32:32
Message-ID: 200909010332.32431.kevink@consistentstate.com
Lists: pgsql-performance
On Tuesday 01 September 2009 03:26:08 Pierre Frédéric Caillaud wrote:
> > We have a table that's > 2billion rows big and growing fast. We've setup
> > monthly partitions for it. Upon running the first of many select * from
> > bigTable insert into partition statements (330million rows per month) the
> > entire box eventually goes out to lunch.
> >
> > Any thoughts/suggestions?
> >
> > Thanks in advance
>
> Did you create the indexes on the partition before or after inserting the
> 330M rows into it ?
> What is your hardware config, where is xlog ?
Indexes are on the partitions, my bad. Also, the HW is a Dell server with 2 quad
cores and 32G of RAM. We have a Dell MD3000 disk array with an MD1000 expansion
bay, 2 controllers, and 2 HBAs/mount points running RAID 10.
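Pierre's question points at the usual bulk-load pattern: inserting into an index-free partition and building the indexes afterwards avoids per-row index maintenance during the 330M-row load. A minimal sketch of that approach (the partition and index names here are illustrative, not from this thread):

```sql
-- Sketch: bulk-load one monthly partition, then build its index.
-- bigtable_200908 and bigtable_200908_time_idx are hypothetical names.
DROP INDEX IF EXISTS bigtable_200908_time_idx;

INSERT INTO bigtable_200908
SELECT * FROM bigTable
WHERE "time" >= extract('epoch' from timestamp '2009-08-01 00:00:00')::int4
  AND "time" <  extract('epoch' from timestamp '2009-09-01 00:00:00')::int4;

-- Recreate the index once, after the load, instead of
-- maintaining it on every inserted row.
CREATE INDEX bigtable_200908_time_idx ON bigtable_200908 ("time");
```

Building the index in a single pass after the load is typically much cheaper than incremental maintenance across hundreds of millions of inserts.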
The explain plan looks like this:
explain SELECT * from bigTable
where
"time" >= extract ('epoch' from timestamp '2009-08-31 00:00:00')::int4
and "time" <= extract ('epoch' from timestamp '2009-08-31 23:59:59')::int
;
QUERY PLAN
------------------------------------------------------------------------------------------------
Index Scan using bigTable_time_index on bigTable (cost=0.00..184.04 rows=1
width=129)
Index Cond: (("time" >= 1251676800) AND ("time" <= 1251763199))
(2 rows)
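For reference, the constants in the Index Cond are just the epoch values of the day's boundaries (a timestamp without time zone is evaluated as UTC here), matching the literals in the query:

```sql
-- Cross-check of the planner's Index Cond constants:
SELECT extract('epoch' from timestamp '2009-08-31 00:00:00')::int4 AS day_start,  -- 1251676800
       extract('epoch' from timestamp '2009-08-31 23:59:59')::int4 AS day_end;    -- 1251763199
```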