From: Justin Clift <justin(at)postgresql(dot)org>
To: Jakab Laszlo <jakablaszlo(at)sofasoft(dot)ro>
Cc: Christopher Kings-Lynne <chriskl(at)familyhealth(dot)com(dot)au>, pgsql-performance(at)postgresql(dot)org
Subject: Re: performance issues for processing more then 150000
Date: 2003-02-21 09:16:18
Message-ID: 3E55EE62.6010909@postgresql.org
Lists: pgsql-performance
Jakab Laszlo wrote:
> Hello Chris,
>
> I mean around 150000 inserts/day here (growing quickly) ... with
> transactions and on the same table ... maybe after a significant
> amount of time we can move one year's worth of records to an archive table ...
> And some 2-3 million selects/day ...
That's no problem at all, depending on:
+ How complex are the queries you intend to run?
+ How will the data be spread between the tables?
+ The amount of data per row also makes a difference, if it is
extremely large.
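As an aside, the yearly archiving idea mentioned above is easy to do in
plain SQL. A minimal sketch, assuming a hypothetical "records" table with
a "created_at" date column and an archive table "records_archive" that
already exists with the same structure:

    BEGIN;
    -- Copy one year's worth of rows into the archive table
    -- (table and column names here are assumptions; adjust to your schema).
    INSERT INTO records_archive
        SELECT * FROM records
        WHERE created_at >= '2002-01-01' AND created_at < '2003-01-01';
    -- Then remove those rows from the live table.
    DELETE FROM records
        WHERE created_at >= '2002-01-01' AND created_at < '2003-01-01';
    COMMIT;

Running both statements in one transaction means other sessions never see
the rows missing from both tables, and a VACUUM on the live table
afterwards will let the space from the deleted rows be reused.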
> I would like to know also some hardware related advice.
You're almost certainly going to need a SCSI or better RAID
array, plus a server with quite a few GB of ECC memory.
If you need to get specific about hardware to the point of knowing
exactly what you need, you're likely best off paying a good PostgreSQL
consultant to study your proposal in depth.
Hope this helps.
Regards and best wishes,
Justin Clift
> thanks,
> Jaki
--
"My grandfather once told me that there are two kinds of people: those
who work and those who take the credit. He told me to try to be in the
first group; there was less competition there."
- Indira Gandhi