From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Isabelle Therrien <therriei(at)LUB(dot)UMontreal(dot)CA>
Cc: pgsql-bugs(at)postgresql(dot)org
Subject: Re: important decrease of performance using the BETA version in one particular case
Date: 2001-03-19 23:44:43
Message-ID: 8052.985045483@sss.pgh.pa.us
Lists: pgsql-bugs

Isabelle Therrien <therriei(at)LUB(dot)UMontreal(dot)CA> writes:
> I have a big query, reported below, that is called several times in my
> application.
> At least 4 active connections call it at the same time.
> Normally, this query is executed in about 30-50 milliseconds.
> But after a while (depending on how many connections are used, and how
> often the query is called),
> the query is executed in 1000ms, then 2000ms, and it continues to grow
> exponentially. I've already seen it reaching 80 seconds.
Hmm, that's odd. What causes the time to drop back down to milliseconds
--- do you have to restart the whole database, or just run it in a new
backend? Does the amount of memory being used by the backend increase
as the time goes up? What does EXPLAIN show as the query plan for the
query? How large are the tables, and how many tuples are actually
retrieved?
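For instance, something along these lines would cover most of that (a rough
sketch only; the names in angle brackets are placeholders, since only you
know the real query and table names):
	EXPLAIN <your big query>;
	SELECT count(*) FROM <each table the query touches>;
and, to watch the backend's memory while the slowdown develops, run
	ps aux | grep postgres
every so often and compare the numbers.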
Also, which beta release exactly, and how did you build it (what
configure options)?
Finally, it would be nice to see the full schemas for these tables, to
be sure we're not missing something. You can generate those via
pg_dump -s -t tablename databasename
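For example (placeholder names; substitute the real table and database names
from your application), once per table involved in the query:
	pg_dump -s -t mytable mydb > mytable-schema.sql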
regards, tom lane