From: | bricklen <bricklen(at)gmail(dot)com> |
---|---|
To: | Max <maxabbr(at)yahoo(dot)com(dot)br> |
Cc: | "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org> |
Subject: | Re: One huge db vs many small dbs |
Date: | 2013-12-05 14:28:41 |
Message-ID: | CAGrpgQ8f1aoPPG3BXsdGhR1JJMawF6WUrKtJz=tkTq-PvRv8Aw@mail.gmail.com |
Lists: | pgsql-performance |
On Thu, Dec 5, 2013 at 2:42 AM, Max <maxabbr(at)yahoo(dot)com(dot)br> wrote:
> We are starting a new project to deploy a solution in the cloud, with the
> possibility of it being used by 2,000+ clients. Each of these clients will
> use several tables to store their information (our model has about 500+
> tables, but there are fewer than 100 core tables with heavy use). Also, the
> projected amount of information per client could range from small (a few
> hundred tuples/MB) to huge (a few million tuples/GB).
>
> One of the many questions we have is about the performance of the db if we
> work with only one (using a ClientID to separate the clients' info) or
> thousands of separate dbs. The management of the dbs is not a huge
> concern, as we have an automated tool.
>
More details would be helpful, some of which could include:
- how much memory is dedicated to PostgreSQL,
- how many servers,
- are you using replication/hot standby,
- what are your data access patterns like (mostly inserts / lots of
concurrent queries; a handful of users versus hundreds querying at the
same time),
- what are your plans for backups,
- what are you planning to do to archive older data?
Also, have you considered separate schemas rather than separate databases?
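For illustration, here is a minimal sketch of the schema-per-tenant idea: all clients live in one database, each in its own schema with an identical set of tables, and the session's search_path selects the tenant at connection time. The schema and table names below are hypothetical, not from the original project.

```sql
-- Hypothetical tenant schema: each client gets its own schema containing
-- the same set of tables, all inside a single database.
CREATE SCHEMA client_0001;

CREATE TABLE client_0001.orders (
    order_id   bigserial PRIMARY KEY,
    created_at timestamptz NOT NULL DEFAULT now(),
    total      numeric(12,2)
);

-- At connection time, point the session at the right tenant; unqualified
-- table names then resolve inside that client's schema.
SET search_path TO client_0001, public;
SELECT count(*) FROM orders;  -- reads client_0001.orders

-- Contrast with the single-schema ClientID approach, where every table,
-- query, and index must carry the discriminator column:
--   CREATE TABLE orders (client_id int NOT NULL, order_id bigserial, ...);
--   SELECT count(*) FROM orders WHERE client_id = 1;
```

This keeps per-client data physically separated (simpler per-tenant backup/restore and DROP SCHEMA offboarding) while sharing one set of connections and one buffer cache, at the cost of a very large catalog (2,000 clients x 500 tables = ~1M pg_class entries), which is worth load-testing.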