From: Doug McNaught <doug(at)wireboard(dot)com>
To: Alejandro Fernandez <ale(at)e-group(dot)org>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Connections per second?
Date: 2002-04-23 16:16:59
Message-ID: m3662i1jb8.fsf@varsoon.wireboard.com
Lists: pgsql-general
Alejandro Fernandez <ale(at)e-group(dot)org> writes:
> Hi,
>
> I'm writing a small but must-be-fast CGI program that, for each hit
> it gets, reads an indexed table in a Postgres database and writes a
> log entry to a file based on the result. Any idea how many hits a
> second it can handle before things start crashing, queuing up too
> much, etc.? And will Postgres be one of the first to fall? Do any
> of you think it can handle 2000 hits a second (what I think I could
> get at peak times), and what would it need to do so? Persistent
> connections? Are there any examples or old threads on writing a
> similar program in C with libpq?
Doing it as CGI is going to incur two big performance penalties:

1) Kernel and system overhead for starting a new process per hit,
   plus interpreter startup if you're using a scripting language
2) Overhead in Postgres for creating a database connection from scratch

Doing it in C only eliminates the interpreter startup.
You really want a non-CGI solution (such as mod_perl) and you really
want persistent connections (Apache::DBI is one solution that works
with mod_perl). Java servlets with a connection pooling library would
also work.
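The payoff of persistent connections is that the connection-setup cost
is paid once per worker rather than once per hit. A minimal sketch of
that idea, in Python with a stub `connect()` standing in for a real
connect call such as libpq's PQconnectdb (no actual database is
touched here; the pool size and hit count are made up for illustration):

```python
import queue

SETUP_COUNT = 0  # counts how many "real" connections were created


def connect():
    """Stand-in for an expensive connect (e.g. libpq's PQconnectdb)."""
    global SETUP_COUNT
    SETUP_COUNT += 1
    return object()  # placeholder for a live connection handle


class ConnectionPool:
    """Tiny fixed-size pool: connections are created once and reused."""

    def __init__(self, size):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(connect())

    def acquire(self):
        return self._pool.get()

    def release(self, conn):
        self._pool.put(conn)


# Simulate 1000 "hits" served by a pool of 5 persistent connections.
pool = ConnectionPool(5)
for _ in range(1000):
    conn = pool.acquire()
    # ... run the indexed-table query on conn here ...
    pool.release(conn)

print(SETUP_COUNT)  # 5: setup cost paid once, not once per hit
```

A per-hit CGI process would instead call `connect()` 1000 times;
Apache::DBI and servlet pooling libraries apply the same reuse trick
transparently.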
-Doug