From: Matt Chambers <chambers(at)imageworks(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: db performance/design question
Date: 2007-09-12 20:33:25
Message-ID: 46E84D15.8020600@imageworks.com
Lists: pgsql-performance
I'm designing a system that will be doing over a million inserts/deletes
on a single table every hour. Rather than using a single table, I could
partition the data into multiple tables, which would be nice because I
can just truncate them when I don't need them. I could even use
tablespaces to split the IO load over multiple filers. The application
does not require all of this data to be in the same table. The data is
fairly temporary: it might last 5 seconds or it might last 2 days, but
it will all be deleted eventually and replaced with different data.
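Roughly what I have in mind, with placeholder table/tablespace names and
a made-up schema:

  -- One tablespace per filer, so each bucket's IO can go to separate storage.
  CREATE TABLESPACE ts_filer0 LOCATION '/mnt/filer0/pgdata';
  CREATE TABLESPACE ts_filer1 LOCATION '/mnt/filer1/pgdata';

  -- One table per bucket instead of a single big table.
  CREATE TABLE work_items_0 (
      id         bigint,
      payload    text,
      created_at timestamptz
  ) TABLESPACE ts_filer0;

  CREATE TABLE work_items_1 (
      id         bigint,
      payload    text,
      created_at timestamptz
  ) TABLESPACE ts_filer1;

  -- When a bucket's data has expired, reclaim it in one shot; TRUNCATE
  -- just drops the underlying files rather than deleting row by row,
  -- and leaves nothing behind for VACUUM to clean up.
  TRUNCATE TABLE work_items_0;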
Considering that a single table would grow to 10 million+ rows at its
peak, and this machine will sustain about 25 Mbps of
insert/update/delete traffic 24/7, 365 days a year, will I be saving
much by partitioning the data like that?
--
-Matt
<http://twiki.spimageworks.com/twiki/bin/view/Software/CueDevelopment>