Re: performance problem - 10.000 databases

From: Marek Florianczyk <franki(at)tpi(dot)pl>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: performance problem - 10.000 databases
Date: 2003-11-05 18:19:39
Message-ID: 1068056378.28827.172.camel@franki-laptop.tpi.pl
Lists: pgsql-admin

In a message of Wed, 2003-11-05 at 18:59, Tom Lane writes:
> Marek Florianczyk <franki(at)tpi(dot)pl> writes:
> > Each client was doing:
>
> > 10 x connect,"select * from table[rand(1-4)] where
> > number=[rand(1-1000)]",disconnect--(fetch one row)
>
> Seems like this is testing the cost of connect and disconnect to the
> exclusion of nearly all else. PG is not designed to process just one
> query per connection --- backend startup is too expensive for that.
> Consider using a connection-pooling module if your application wants
> short-lived connections.

You're right; a typical PHP page will probably issue more queries per
view. But how good is a connection-pooling module when the connection
from each virtual site is unique? Each site has a different user and
password, and different schemas and permissions, so the pooling module
would have to switch between users without reconnecting to the
database. Is that even possible?
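
For illustration, the closest thing I can see in Perl is DBI's
connect_cached(), which hands back an already-open handle when the same
DSN/user/password combination is requested again. This is only a
sketch; the database name, host, and the credential scheme are my
assumptions:

  use strict;
  use warnings;
  use DBI;

  # connect_cached() keys its cache on DSN + user + password + attrs,
  # so repeated requests from the same virtual site reuse the handle.
  sub handle_for_site {
      my ($dbname, $user, $pass) = @_;   # hypothetical per-site credentials
      return DBI->connect_cached(
          "dbi:Pg:dbname=$dbname;host=localhost",
          $user, $pass,
          { RaiseError => 1, AutoCommit => 1 },
      );
  }

  # e.g. for virtual site 42 (password == username in my test setup):
  my $dbh = handle_for_site('test', 'test42', 'test42');
  my $row = $dbh->selectrow_arrayref(
      'SELECT * FROM table1 WHERE number = ?', undef, int(rand(1000)) + 1);

But since the cache is keyed on the user, every distinct user still
pins its own backend, so with thousands of sites this does not really
pool anything across users.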

>
> > I noticed that queries like: "\d table1" "\di" "\dp" are extremely slow,
>
> I thought maybe you'd uncovered a performance issue with lots of
> schemas, but I can't reproduce it here. I made 10000 schemas each
> containing a table "mytab", which is about the worst case for an
> unqualified "\d mytab", but it doesn't seem excessively slow --- maybe
> about a quarter second to return the one mytab that's actually in my
> search path. In realistic conditions where the users aren't all using
> the exact same table names, I don't think there's an issue.

But did you run that test under some database load, e.g. with 100
clients connected, as in my example? When I run these "\d" queries with
no clients connected and after an ANALYZE, they are fast, but just 100
connected clients is enough to stretch the query time to 30 sec. :(

I have 3000 schemas named test[1-3000] and 3000 users named
test[1-3000]. Each schema contains four tables (table1, table2, table3,
table4); each table has three columns (int, text, int), and some of
them also have indexes.
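
In case it helps to reproduce, the setup was generated with a script
along these lines (a sketch; the column names, index names, and which
tables got indexes are placeholders here):

  use strict;
  use warnings;

  # Emits the SQL for 3000 users and schemas with four tables each;
  # pipe the output into psql as a superuser.
  for my $i (1 .. 3000) {
      print "CREATE USER test$i WITH PASSWORD 'test$i';\n";
      print "CREATE SCHEMA test$i AUTHORIZATION test$i;\n";
      for my $t (1 .. 4) {
          print "CREATE TABLE test$i.table$t (number int, txt text, other int);\n";
      }
      # some of the tables also got an index, e.g.:
      print "CREATE INDEX table1_number_idx ON test$i.table1 (number);\n";
  }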

If you want, I will send you the Perl script that forks into 100
processes and performs my queries.
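
The core of it looks like this (a trimmed-down sketch, assuming
DBI/DBD::Pg; the database name and host are placeholders, and the
user/password pairs match the test[1-3000] setup above):

  use strict;
  use warnings;
  use DBI;

  # Fork 100 clients; each picks a random virtual site and does
  # 10 x connect / select one row / disconnect, as in my test.
  my @pids;
  for my $client (1 .. 100) {
      my $pid = fork();
      die "fork failed: $!" unless defined $pid;
      if ($pid == 0) {                     # child = one simulated site
          my $n = int(rand(3000)) + 1;     # random site test[1-3000]
          for (1 .. 10) {
              my $dbh = DBI->connect(
                  "dbi:Pg:dbname=test;host=localhost",
                  "test$n", "test$n",      # password == username in the test setup
                  { RaiseError => 1 });
              my $t   = int(rand(4)) + 1;  # table[rand(1-4)]
              my $row = $dbh->selectrow_arrayref(
                  "SELECT * FROM table$t WHERE number = ?",
                  undef, int(rand(1000)) + 1);
              $dbh->disconnect;
          }
          exit 0;
      }
      push @pids, $pid;
  }
  waitpid($_, 0) for @pids;

Because search_path defaults to "$user",public, the unqualified
table$t resolves to each user's own schema.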

greetings
Marek

>
> regards, tom lane
