| From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
|---|---|
| To: | Alex Avriette <a_avriette(at)acs(dot)org> |
| Cc: | "'pgsql-general(at)postgresql(dot)org'" <pgsql-general(at)postgresql(dot)org> |
| Subject: | Re: Suitability of postgres for very high transaction volume |
| Date: | 2001-12-11 00:39:41 |
| Message-ID: | 6895.1008031181@sss.pgh.pa.us |
| Lists: | pgsql-general |
Alex Avriette <a_avriette(at)acs(dot)org> writes:
> I'm intending to use postgres as a new backend for a server I am running.
> The throughput is roughly 8gb per day over 10,000 concurrent
> connections.
You will need to find a way of pooling those connections; I doubt you
really want to have 10000 backend processes running at once, do you?
> ... I'm using perl's POE, so there could conceivably be
> several dozen to even a hundred or more concurrent queries.
A hundred or so concurrent operations seems perfectly reasonable, given
that you're using some serious iron. But I think you want a hundred
active backends, not a hundred active ones and 9900 idle ones.
regards, tom lane
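[Editor's note: the pooling advice above can be sketched in a few lines. This is an editorial illustration, not code from the thread; `ConnectionPool` and `make_conn` are hypothetical names, and a real deployment would use a pooler such as a middleware layer in front of PostgreSQL. The idea is simply that a blocking queue caps how many connections are ever checked out, so thousands of clients share a small, fixed set of backends.]

```python
import queue
import threading

class ConnectionPool:
    """Minimal fixed-size pool: clients block until a connection is free,
    so the number of concurrently checked-out connections never exceeds
    the pool size -- i.e. ~100 active backends, not 10,000."""

    def __init__(self, factory, size):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(factory())

    def acquire(self):
        # Blocks while all connections are in use.
        return self._pool.get()

    def release(self, conn):
        self._pool.put(conn)

# Hypothetical stand-in for a real database connection object.
def make_conn():
    return object()

pool = ConnectionPool(make_conn, size=5)

# Track the peak number of simultaneously held connections.
in_use = 0
peak = 0
lock = threading.Lock()

def client():
    global in_use, peak
    conn = pool.acquire()
    with lock:
        in_use += 1
        peak = max(peak, in_use)
    # ... run a query here ...
    with lock:
        in_use -= 1
    pool.release(conn)

threads = [threading.Thread(target=client) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

One hundred client threads run, but the pool guarantees at most five connections are ever held at once; the same mechanism scales the 10,000-client case down to a bounded number of backends.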