Re: performance problem - 10.000 databases

From: Marek Florianczyk <franki(at)tpi(dot)pl>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: performance problem - 10.000 databases
Date: 2003-11-05 18:30:55
Message-ID: 1068057055.28827.180.camel@franki-laptop.tpi.pl

On Wed, 2003-11-05, at 19:34, Tom Lane wrote:
> Marek Florianczyk <franki(at)tpi(dot)pl> writes:
> > But did you do that under some database load ? eg. 100 clients
> > connected, like in my example ? When I do these queries "\d" without any
> > clients connected and after ANALYZE it's fast, but only 100 clients is
> > enough to lengthen query time to 30 sec. :(
>
> Then it's not \d's fault --- you simply don't have enough horsepower to
> support 100 concurrent clients, regardless of what specific query you're
> testing.
>
> You might find that not reconnecting so often would improve matters;
> I'm sure that a lot of your cycles are being taken by backend startup.

Maybe the reconnects are too frequent, but then how to explain that regular
queries like "select * from table1" are much faster than \d ? ( my post to Jeff )
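[ Editorial note: Tom's point about backend startup cost can be illustrated
with a rough, hypothetical micro-benchmark. The sketch below uses Python's
stdlib sqlite3 purely as a stand-in database, since it needs no server;
PostgreSQL's per-connection cost (a forked backend plus catalog cache
warm-up) is far larger than SQLite's, so the gap only widens there. ]

```python
import os
import sqlite3
import tempfile
import time

# Set up a small throwaway database with one table, analogous to "table1".
path = os.path.join(tempfile.mkdtemp(), "bench.db")
with sqlite3.connect(path) as conn:
    conn.execute("CREATE TABLE table1 (id INTEGER)")
    conn.executemany("INSERT INTO table1 VALUES (?)",
                     [(i,) for i in range(100)])
    conn.commit()

N = 200  # number of queries to run in each scenario

# Scenario 1: open a brand-new connection for every single query.
t0 = time.perf_counter()
for _ in range(N):
    c = sqlite3.connect(path)
    c.execute("SELECT * FROM table1").fetchall()
    c.close()
reconnect_time = time.perf_counter() - t0

# Scenario 2: one persistent connection reused for all queries.
t0 = time.perf_counter()
c = sqlite3.connect(path)
for _ in range(N):
    c.execute("SELECT * FROM table1").fetchall()
c.close()
persistent_time = time.perf_counter() - t0

print(f"reconnect per query: {reconnect_time:.3f}s")
print(f"persistent:          {persistent_time:.3f}s")
```

With per-query reconnects, connection setup is paid N times; a persistent
connection (or a pooler in front of PostgreSQL) pays it once.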

Marek
