From: | "Merlin Moncure" <merlin(dot)moncure(at)rcsonline(dot)com> |
---|---|
To: | "Michael Riess" <mlriess(at)gmx(dot)de> |
Cc: | <pgsql-performance(at)postgresql(dot)org> |
Subject: | Re: 15,000 tables |
Date: | 2005-12-01 20:28:58 |
Message-ID: | 6EE64EF3AB31D5448D0007DD34EEB3417DD9DE@Herge.rcsinc.local |
Lists: pgsql-performance
> we are currently running a postgres server (upgraded to 8.1) which has
> one large database with approx. 15,000 tables. Unfortunately
> performance suffers from that, because the internal tables (especially
> that which holds the attribute info) get too large.
>
> (We NEED that many tables, please don't recommend reducing them)
>
> Logically these tables could be grouped into 500 databases. My
> question is:
>
> Would performance be better if I had 500 databases (on one postgres
> server instance) which each contain 30 tables, or is it better to have
> one large database with 15,000 tables? In the old days of postgres 6.5
> we tried that, but performance was horrible with many databases ...
>
> BTW: I searched the mailing list, but found nothing on the subject - and
> there also isn't any information in the documentation about the effects
> of the number of databases, tables or attributes on the performance.
>
> Now, what do you say? Thanks in advance for any comment!
I've never run anywhere near that many databases on one box, so I can't
comment on the performance. But let's assume for the moment that pg runs
fine with 500 databases. The most important advantage of the multi-schema
approach is cross-schema querying. Given how you are defining your problem,
I think that is the better way to do things.
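For example, if each group of 30 tables lived in its own schema (the schema
and table names below are made up, just a sketch of the idea), you could
aggregate across the groups in a single statement, which plain SQL can't do
across separate databases:

  -- one schema per group instead of one database per group
  CREATE SCHEMA customer_001;
  CREATE SCHEMA customer_002;

  CREATE TABLE customer_001.orders (id int, total numeric);
  CREATE TABLE customer_002.orders (id int, total numeric);

  -- cross-schema query: one statement sees both groups
  SELECT 'customer_001' AS src, count(*) FROM customer_001.orders
  UNION ALL
  SELECT 'customer_002', count(*) FROM customer_002.orders;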
Merlin