From: Poul Møller Hansen <freebsd(at)pbnet(dot)dk>
To: Richard Huxton <dev(at)archonet(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Setting up a database for 10000 concurrent users
Date: 2005-09-05 21:01:26
Message-ID: 431CB226.6030008@pbnet.dk
Lists: pgsql-general
>
> I think you're being horribly optimistic if you actually want 10000
> concurrent connections, with users all doing things. Even if you only
> allow 1MB for each connection that's 10GB of RAM you'd want. Plus a big
> chunk more to actually cache your database files and do work in. Then,
> if you had 10,000 concurrent queries you'd probably want a mainframe to
> handle all the concurrency, or perhaps a 64-CPU box would suffice...
>
> You probably want to investigate connection pooling, but if you say what
> you want to achieve then people will be able to suggest the best approach.
>
I know I'm on thin ice :)
Actually, that was a maximum limit; I want to test how far I can tune the server.
The clients are doing almost nothing most of the time, maybe one insert
every 2 minutes each. Of course, across 10,000 clients that still adds up to
more than 80 inserts per second (10,000 inserts / 120 s ≈ 83/s).
I'm connecting to the database via JDBC, where connection pooling is
possible and is also being considered.
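As a rough sketch of what I have in mind (host, database, credentials and
pool size below are only placeholders), using the PGPoolingDataSource that
ships with the PostgreSQL JDBC driver:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import org.postgresql.ds.PGPoolingDataSource;

    public class PoolSketch {
        public static void main(String[] args) throws Exception {
            // One pool shared by all client threads; sized far below
            // the number of clients, since each insert is brief.
            PGPoolingDataSource pool = new PGPoolingDataSource();
            pool.setDataSourceName("Test Pool");
            pool.setServerName("localhost");   // placeholder host
            pool.setDatabaseName("testdb");    // placeholder database
            pool.setUser("testuser");          // placeholder credentials
            pool.setPassword("secret");
            pool.setMaxConnections(50);

            // A client borrows a connection just for the insert;
            // close() returns it to the pool instead of closing it.
            Connection con = pool.getConnection();
            try {
                PreparedStatement ps = con.prepareStatement(
                        "INSERT INTO readings (value) VALUES (?)");
                ps.setInt(1, 42);
                ps.executeUpdate();
                ps.close();
            } finally {
                con.close();
            }
        }
    }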
I haven't been able to find out how much memory I can expect each client
connection to consume, so I thought testing would be more accurate than
calculating.
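For example, something along these lines could open and hold a batch of
idle connections, so the per-backend memory use can be watched with top/ps
on the server (the URL and credentials are placeholders, and the server's
max_connections must of course be raised to match):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.ArrayList;
    import java.util.List;

    public class ConnMemTest {
        public static void main(String[] args) throws Exception {
            Class.forName("org.postgresql.Driver");
            String url = "jdbc:postgresql://localhost/testdb"; // placeholder
            List<Connection> held = new ArrayList<Connection>();
            for (int i = 1; i <= 1000; i++) {
                // Each getConnection() starts one backend process on
                // the server, whose memory use can then be observed.
                held.add(DriverManager.getConnection(url, "testuser", "secret"));
                if (i % 100 == 0) {
                    System.out.println(i + " connections open");
                }
            }
            // Keep the connections open while measuring on the server.
            Thread.sleep(10 * 60 * 1000);
        }
    }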
Is 1MB of RAM per connection really necessary?
Poul