From: Mike Nolan <nolan(at)gw(dot)tssi(dot)com>
To: listas(at)miti(dot)com(dot)br (Kilmer C. de Souza)
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Postgre and Web Request
Date: 2004-04-28 19:54:16
Message-ID: 200404281954.i3SJsGV8015685@gw.tssi.com
Lists: pgsql-general
> I made a mistake ... there are 10,000 users, and 1,000 of the 10,000 try to
> access the database at the same time.
I have problems with your numbers. Even if you have 10,000 users who
are ALL online at the same time, how many of them would actually initiate
a request in any reasonable period of time (say 60 seconds)?
In most online applications, 95% OR MORE of all time is spent waiting
for the user to do something. Web-based applications seem to fit that
rule fairly well, because nothing happens at the server end for any
given user until a 'submit' button is pressed.
Consider, for example, a simple name-and-address entry form. A really
fast typist can probably fill out 60-70 of them in an hour. That
means each user is submitting a request every 50-60 seconds. Thus
if there were 10,000 users doing this FULL TIME, they would generate
something under 200 requests/second.
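A quick back-of-envelope check of those numbers (a sketch of the arithmetic
above, not a benchmark of any real system):

    # Peak request rate if every user submits a form as fast as a
    # fast typist can fill one out (60-70 forms/hour).
    USERS = 10_000
    for forms_per_hour in (60, 70):
        seconds_between_submits = 3600 / forms_per_hour
        peak_rps = USERS / seconds_between_submits
        print(f"{forms_per_hour} forms/hr -> one submit every "
              f"{seconds_between_submits:.0f} s -> {peak_rps:.0f} requests/s")

    # 60 forms/hr -> one submit every 60 s -> 167 requests/s
    # 70 forms/hr -> one submit every 51 s -> 194 requests/s

Both ends of the range come out "under 200 requests/second", and that is
the absolute worst case where all 10,000 users type nonstop.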
In practice, I wouldn't expect to see more than 50-75 requests/second,
and it shouldn't be too hard to design a hardware configuration capable
of supporting that; disk speed and memory size are likely to be the
major bottleneck points.
I don't know if anyone has ever set up a queuing theory model for a
PostgreSQL+Apache environment; there are probably too many individual
tuning factors (not to mention application-specific factors) to make
a generalizable model practical.
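For what it's worth, here is a minimal sketch of the kind of model meant
here, assuming Poisson arrivals and exponential service times (M/M/1).
The arrival rate and the 10 ms service time are made-up assumptions for
illustration, not measurements of any real PostgreSQL+Apache installation:

    # M/M/1 queue: single server, Poisson arrivals, exponential service.
    arrival_rate = 75.0    # requests/second (upper end of the estimate above)
    service_time = 0.010   # seconds per request (assumed 10 ms, not measured)

    rho = arrival_rate * service_time   # server utilization
    assert rho < 1, "unstable: requests arrive faster than they complete"

    # Standard M/M/1 steady-state results:
    avg_in_system = rho / (1 - rho)           # average requests in the system
    avg_response = service_time / (1 - rho)   # average response time (seconds)

    print(f"utilization   : {rho:.1%}")                     # 75.0%
    print(f"avg in system : {avg_in_system:.2f} requests")  # 3.00
    print(f"avg response  : {avg_response * 1000:.1f} ms")  # 40.0 ms

Even this toy model shows why utilization matters: response time blows up
as rho approaches 1, which is why the tuning factors mentioned above make
a general-purpose model so hard to build.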
--
Mike Nolan