From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Marek Florianczyk <franki(at)tpi(dot)pl>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: performance problem - 10.000 databases
Date: 2003-10-31 14:23:39
Message-ID: 9643.1067610219@sss.pgh.pa.us
Lists: pgsql-admin
Marek Florianczyk <franki(at)tpi(dot)pl> writes:
> We are building hosting with apache + php ( our own mod_virtual module )
> with about 10.000 virtual domains + PostgreSQL.
> PostgreSQL is on a different machine ( 2 x intel xeon 2.4GHz 1GB RAM
> scsi raid 1+0 )
> I've made some tests - 3000 databases and 400 clients connected at the
> same time.
You are going to need much more serious iron than that if you want to
support 10000 active databases. The required working set per database
is a couple hundred K just for system catalogs (I don't have an exact
figure in my head, but it's surely of that order of magnitude). So the
system catalogs alone would require 2 gig of RAM to keep 'em swapped in;
never mind caching any user data.
The recommended way to handle this is to use *one* database and create
10000 users each with his own schema. That should scale a lot better.
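
As a rough illustration of the one-database / many-schemas setup (a minimal
sketch -- the user name, schema name, and password here are made up, not
from this thread):

    CREATE USER client0001 PASSWORD 'secret';
    CREATE SCHEMA client0001 AUTHORIZATION client0001;

With the default search_path of "$user,public", each user's unqualified
table names resolve to its own schema, so the per-client application code
does not have to change.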
Also, with a large max_connections setting, you have to make sure your
kernel settings are adequate --- particularly the open-files table.
It's pretty easy for Postgres to eat all your open-file slots. PG
itself will usually survive this condition just fine, but everything
else you run on the machine will start falling over :-(. For safety
you should make sure that max_connections * max_files_per_process is
comfortably less than the size of the kernel's open-files table.
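
To make that arithmetic concrete (the numbers below are illustrative only,
not recommendations):

    # postgresql.conf
    max_connections = 400
    max_files_per_process = 200    # lowered from the default of 1000

    # worst case: 400 * 200 = 80,000 descriptors held by Postgres alone,
    # which needs to stay comfortably below the kernel's open-files limit
    # (e.g. fs.file-max on Linux).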
regards, tom lane