From: | "Merlin Moncure" <mmoncure(at)gmail(dot)com> |
---|---|
To: | "Tarhon-Onu Victor" <mituc(at)iasi(dot)rdsnet(dot)ro> |
Cc: | pgsql-performance(at)postgresql(dot)org |
Subject: | Re: 500 requests per second |
Date: | 2007-05-21 19:50:27 |
Message-ID: | b42b73150705211250w61f9737ele7c498d8e1517ed3@mail.gmail.com |
Lists: pgsql-performance
On 5/12/07, Tarhon-Onu Victor <mituc(at)iasi(dot)rdsnet(dot)ro> wrote:
>
> Hi guys,
>
> I'm looking for a database+hardware solution which should be able
> to handle up to 500 requests per second. The requests will consist of:
> - single row updates in indexed tables (the WHERE clauses will use
> the index(es), the updated column(s) will not be indexed);
> - inserts in the same kind of tables;
> - selects with approximately the same WHERE clause as the update
> statements will use.
> So nothing very special about these requests, only about the
> throughput.
>
> Can anyone give me an idea about the hardware requirements, the type
> of clustering (at the Postgres level or the OS level), and possibly the OS
> (ideally Linux) which I could use to get something like this in
> place?
I work on a system much like you describe: roughly 400 tps, constant, 24/7.
The major challenges are routine maintenance and locking. Autovacuum is
your friend, but you will need to schedule a full vacuum once in a
while because of transaction ID (XID) wraparound. If you let autovacuum do
this, none of your other tables get vacuumed until it completes... heh!
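For example, here is a minimal sketch of how you might watch XID age and
vacuum the hot tables on your own schedule rather than waiting for a
wraparound-forced vacuum (the table name is a made-up placeholder):

    -- See how far each database is from transaction ID wraparound.
    SELECT datname, age(datfrozenxid) AS xid_age
      FROM pg_database
     ORDER BY xid_age DESC;

    -- Vacuum busy tables yourself during a quiet window so a
    -- wraparound-forced vacuum doesn't tie up autovacuum later.
    VACUUM ANALYZE orders;   -- table name is hypothetical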
If you lock the wrong table, transactions will accumulate rapidly and
the system will grind to a halt rather quickly (this can be mitigated
somewhat by smart code on the client; see the sketch below).
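One hedged example of what "smart code on the client" can mean: fail fast
instead of queueing behind a held lock and retry from the application. The
table, column, and timeout values below are illustrative assumptions:

    BEGIN;
    -- Give up quickly instead of piling transactions up behind a lock.
    SET LOCAL statement_timeout = 2000;   -- milliseconds

    -- NOWAIT raises an error immediately if another session holds the row,
    -- so the client can back off and retry rather than block.
    SELECT balance
      FROM accounts
     WHERE account_id = 42
       FOR UPDATE NOWAIT;

    UPDATE accounts
       SET balance = balance - 10
     WHERE account_id = 42;
    COMMIT;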
Other general advice:
* reserve plenty of space for WAL and keep the WAL volume separate from the
data volume... during a long-running transaction WAL files accumulate
rapidly, and the server will PANIC if the WAL volume runs out of space.
* set a reasonable statement_timeout
* back up with PITR; pg_dump is a headache on extremely busy servers.
* get a good I/O system for your box; start with a 6-disk RAID 10 and go
from there.
* spend some time reading about the bgwriter settings, commit_delay, etc.
(a small settings sketch follows this list)
* keep an eye out for PostgreSQL HOT (heap-only tuples, hopefully coming in
8.3) and make allowances for it in your design if possible.
* normalize your database and think of vacuum as a dangerous enemy.
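As a starting point for the settings mentioned above, a minimal sketch; the
values are illustrative assumptions to tune against your own workload, not
recommendations, and most of them normally live in postgresql.conf:

    -- Kill runaway statements before they accumulate; can be set per
    -- session or per role as well as in postgresql.conf (milliseconds).
    SET statement_timeout = 5000;

    -- Inspect the background-writer and commit/WAL knobs worth reading up on:
    SHOW bgwriter_delay;
    SHOW bgwriter_lru_maxpages;
    SHOW commit_delay;
    SHOW checkpoint_segments;   -- how much WAL accrues between checkpoints
    SHOW wal_buffers;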
good luck! :-)
merlin