From: | Marek Florianczyk <franki(at)tpi(dot)pl> |
---|---|
To: | Bruno Wolff III <bruno(at)wolff(dot)to> |
Cc: | Jeff <threshar(at)torgo(dot)978(dot)org>, pgsql-admin(at)postgresql(dot)org |
Subject: | Re: performance problem - 10.000 databases |
Date: | 2003-11-05 18:01:38 |
Message-ID: | 1068055298.28821.155.camel@franki-laptop.tpi.pl |
Lists: | pgsql-admin |
On Wed, 2003-11-05, at 17:18, Bruno Wolff III wrote:
> On Wed, Nov 05, 2003 at 16:14:59 +0100,
> Marek Florianczyk <franki(at)tpi(dot)pl> wrote:
> > One database with 3,000 schemas works better than 3,000 databases, but
> > there is a REAL, BIG problem, and I won't be able to use this solution:
> > every query like "\d table" or "\di" takes a veeeeeeery long time.
> > Users have to have phpPgAdmin, which I modified to suit our needs, but
> > now it doesn't work, not even log-in. If I rewrite phpPgAdmin to log
> > users in without checking all schemas, and the tables within schemas,
> > none of the users will be able to examine the structure of a table.
> > A query like "\d table" from the psql monitor takes about 2-5 MINUTES :(
>
> Analyzing the system tables will likely make these queries go faster.
I ran:
VACUUM FULL;
ANALYZE;
and it works better, but it's no revelation. When I do "\d
schemaname.table" it's better, but I still have to wait about 10-30
seconds, and only 100 clients are connected right now. :(
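For reference, a minimal sketch of what Bruno's suggestion could look like if applied only to the system catalogs that back "\d", rather than a full-database VACUUM FULL (the catalog names are standard pg_catalog relations; running this per-database as superuser is an assumption, not something from the thread):

```sql
-- Hedged sketch: vacuum/analyze only the catalogs consulted by \d and \di,
-- so the planner has fresh statistics for catalog lookups across all schemas.
VACUUM ANALYZE pg_catalog.pg_class;      -- tables, indexes, sequences
VACUUM ANALYZE pg_catalog.pg_attribute;  -- column definitions
VACUUM ANALYZE pg_catalog.pg_namespace;  -- schemas
VACUUM ANALYZE pg_catalog.pg_index;      -- index metadata
```

This is much cheaper than VACUUM FULL on the whole database and can be scheduled frequently, which matters when thousands of schemas keep the catalogs large and heavily updated.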
Marek