Re: performance problem - 10.000 databases

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Marek Florianczyk <franki(at)tpi(dot)pl>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: performance problem - 10.000 databases
Date: 2003-11-05 17:59:49
Message-ID: 3453.1068055189@sss.pgh.pa.us
Lists: pgsql-admin

Marek Florianczyk <franki(at)tpi(dot)pl> writes:
> Each client was doing:

> 10 x connect,"select * from table[rand(1-4)] where
> number=[rand(1-1000)]",disconnect--(fetch one row)

Seems like this is testing the cost of connect and disconnect to the
exclusion of nearly all else. PG is not designed to process just one
query per connection --- backend startup is too expensive for that.
Consider using a connection-pooling module if your application wants
short-lived connections.
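
For illustration, here is a minimal sketch of the pooled pattern in
Python using psycopg2's SimpleConnectionPool (a stand-in for whatever
pooling module you choose; the DSN and the table1..table4 / number
names are assumptions taken from the test described above):

    import random
    import psycopg2.pool

    # Open a small pool once, then reuse backends across queries
    # instead of paying backend startup on every request.
    pool = psycopg2.pool.SimpleConnectionPool(
        minconn=1, maxconn=10,
        dbname="testdb", user="testuser", host="localhost",  # assumed DSN
    )

    for _ in range(10):
        conn = pool.getconn()  # borrow an already-started backend
        try:
            table = f"table{random.randint(1, 4)}"  # table1..table4
            with conn.cursor() as cur:
                cur.execute(f"SELECT * FROM {table} WHERE number = %s",
                            (random.randint(1, 1000),))
                row = cur.fetchone()  # fetch one row, as in the test
        finally:
            pool.putconn(conn)  # return to the pool, don't disconnect

    pool.closeall()

The point is that the ten queries share a handful of long-lived
backends rather than forking and tearing down a new one each time.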

> I noticed that queries like: "\d table1" "\di" "\dp" are extremely slow,

I thought maybe you'd uncovered a performance issue with lots of
schemas, but I can't reproduce it here. I made 10000 schemas each
containing a table "mytab", which is about the worst case for an
unqualified "\d mytab", but it doesn't seem excessively slow --- maybe
about a quarter second to return the one mytab that's actually in my
search path. In realistic conditions where the users aren't all using
the exact same table names, I don't think there's an issue.
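
For anyone who wants to repeat that test, a sketch of the setup
(the schema names s0..s9999 and the single-column mytab definition
are my invention; Python again, driving plain SQL):

    import psycopg2

    # Build 10000 schemas, each containing its own "mytab", to stress
    # the unqualified-name lookup that "\d mytab" has to perform.
    conn = psycopg2.connect(dbname="testdb")  # assumed DSN
    conn.autocommit = True
    with conn.cursor() as cur:
        for i in range(10000):
            cur.execute(f"CREATE SCHEMA s{i}")
            cur.execute(f"CREATE TABLE s{i}.mytab (number integer)")
        # With only one of those schemas on the search path, an
        # unqualified "\d mytab" in psql must filter the 10000
        # candidate tables down to the one that is actually visible.
        cur.execute("SET search_path TO s42, public")
    conn.close()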

regards, tom lane
