From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "scott(dot)marlowe" <scott(dot)marlowe(at)ihs(dot)com>
Cc: Doug McNaught <doug(at)mcnaught(dot)org>, Francois Suter <dba(at)paragraf(dot)ch>, pgsql-general(at)postgresql(dot)org
Subject: Re: Urgent: 10K or more connections
Date: 2003-07-18 21:28:58
Message-ID: 4347.1058563738@sss.pgh.pa.us
Lists: pgsql-general pgsql-hackers
"scott.marlowe" <scott(dot)marlowe(at)ihs(dot)com> writes:
> But I'm sure that with a few tweaks to the code here and there it's
> doable, just don't expect it to work "out of the box".

I think you'd be sticking your neck out to assume that 10k concurrent
connections would perform well, even after tweaking. I'd worry first
about whether the OS can handle 10k processes (which among other things
would probably require order-of-300k open file descriptors...). Maybe
Solaris is built to do that but the Unixen I've dealt with would go
belly up. After that you'd have to look at Postgres' internal issues
--- contention on access to the PROC array would probably become a
significant factor, for example, and we'd have to do some redesign to
avoid linear scans of the PROC array where possible.
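The "order-of-300k" figure above follows from simple arithmetic. As a rough sketch (the ~30 descriptors per backend is an illustrative assumption, not a measured PostgreSQL number; real counts depend on how many relation files, sockets, and WAL segments each backend has open):

```python
# Back-of-the-envelope estimate of open file descriptors for 10k backends.
# FDS_PER_BACKEND is an assumed figure for illustration only.
FDS_PER_BACKEND = 30      # data files + indexes + WAL + sockets, roughly
backends = 10_000

total_fds = backends * FDS_PER_BACKEND
print(total_fds)          # on the order of 300,000 descriptors system-wide
```

Most kernels of that era would need their system-wide and per-process descriptor limits raised well beyond the defaults to sustain anything near that.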
I don't doubt that we could support 10k concurrent *users*, given
connection pooling of some kind. I'm dubious about 10k concurrent
database sessions though.
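The distinction between concurrent *users* and concurrent *sessions* is what a connection pooler exploits: many clients share a small, fixed set of database sessions, so the server never sees more than the pool size. A minimal sketch of the idea (hypothetical names; real poolers such as middleware-level ones add transaction handling, timeouts, and health checks):

```python
import queue


class ConnectionPool:
    """Fixed-size pool: many callers share a few database sessions.

    make_conn is any zero-argument factory producing a connection-like
    object; object() stands in for a real session here.
    """

    def __init__(self, make_conn, size):
        self._q = queue.Queue()
        for _ in range(size):
            self._q.put(make_conn())

    def acquire(self):
        # Blocks until some other caller releases a session, so the
        # database never holds more than `size` sessions at once.
        return self._q.get()

    def release(self, conn):
        self._q.put(conn)


# 10k users could funnel through, say, 50 sessions:
pool = ConnectionPool(make_conn=object, size=50)
conn = pool.acquire()
# ... run queries ...
pool.release(conn)
```

The server-side cost then scales with the pool size, not the user count, which is why 10k users is plausible where 10k sessions is not.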
regards, tom lane