| From: | Max <maxabbr(at)yahoo(dot)com(dot)br> |
|---|---|
| To: | "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org> |
| Subject: | One huge db vs many small dbs |
| Date: | 2013-12-05 10:42:10 |
| Message-ID: | 1386240130.78000.YahooMailNeo@web163002.mail.bf1.yahoo.com |
| Lists: | pgsql-performance |
Hello,
We are starting a new project to deploy a cloud solution that could be used by 2,000+ clients. Each of these clients will use several tables to store their information (our model has about 500+ tables, but fewer than 100 core tables with heavy use). The projected amount of information per client could range from small (a few hundred tuples / MB) to huge (a few million tuples / GB).
One of the many questions we have is about database performance if we work with only one database (using a ClientID column to separate each client's data) versus thousands of separate databases. Managing the databases is not a huge concern, as we have an automated tool for that.
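For illustration, the two designs being compared could be sketched roughly like this (table and column names here are hypothetical, just to make the alternatives concrete):

```sql
-- Option 1: one shared database; every tenant's rows live in the same
-- tables, distinguished by a client_id discriminator column.
-- Every query must filter on client_id, so it leads the primary key.
CREATE TABLE invoices (
    client_id  integer NOT NULL,
    invoice_id bigint  NOT NULL,
    total      numeric,
    PRIMARY KEY (client_id, invoice_id)
);

-- Option 2: one schema (or one whole database) per client; identical
-- table definitions per tenant, no discriminator column needed.
CREATE SCHEMA client_0001;
CREATE TABLE client_0001.invoices (
    invoice_id bigint PRIMARY KEY,
    total      numeric
);
```

Note that either way the numbers above imply a very large catalog if tenants are split out: 2,000 clients × 500 tables would mean a million or more relations in the cluster, versus a fixed 500 tables (with much larger row counts) in the shared design.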
Googling turns up lots of discussion on this subject, but none of it describes a scenario matching the one I presented above, so I would like to know if anyone here has a similar situation, or relevant experience, and could share some thoughts.
Thanks
Max