Slow access to remote database

From: "Rob Richardson" <Rob(dot)Richardson(at)rad-con(dot)com>
To: <pgsql-general(at)postgresql(dot)org>
Cc: "Ward Eaton" <Ward(dot)Eaton(at)rad-con(dot)com>
Subject: Slow access to remote database
Date: 2008-06-04 19:32:42
Message-ID: 04A6DB42D2BA534FAC77B90562A6A03D94324D@server.rad-con.local
Lists: pgsql-general

Greetings!

A customer has two servers: one hosting a PostgreSQL database and our
main application, and the other, the model server, hosting a couple of
other applications that run rarely but take a few minutes and most of
the processor when they do. The model server is a leased machine, and
the lease is up. They have a new machine to replace it. I copied our
applications to the new model server, set up its DSN to point to the
database server exactly as the old model server's DSN was set up, and
tested the model applications. Both of them run at least ten times
slower than they did on the old model server.

The problem appears to be that large tables are being read from the
database. One has on the order of 50 columns and 81,207 rows. While one
of the models runs, I can see in Task Manager that its allocated memory
grows by about 100-200 Kbytes/sec. When I run the model on my own
machine, which also has a copy of the database, allocated memory grows
by about 15 Mbytes/sec.

I have replicated the problem on my machine, with the application
running on my computer and the database hosted on a lab computer.

I actually use only three of the 81,000 rows in the large table. (The
people who originally wrote this program had no concept of efficient
database use.) I tried executing the same query against the same remote
database in pgAdmin, and got my three rows in about a third of a second.
But when I opened the entire table in pgAdmin, I saw the same
performance as in my application: pgAdmin.exe's allocated memory went
up by 100-200 Kbytes/sec.

The model application is written in Microsoft Visual C++ V6.0. Data
access is through a DSN. The code uses Microsoft ADODB library objects.

We have a library that wraps the ADODB objects. That wrapper library
will combine a table name, a list of fields, a filter clause and a sort
clause into a single SQL SELECT statement. By the time the query is
sent to the database, it looks something like:

"SELECT field1, field2, ... , field50 FROM inventory WHERE coil_id =
'Coil1' or coil_id = 'Coil2' or coil_id = 'Coil3'"
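
Under the hood, the wrapper's open call boils down to something like
this (a simplified sketch: the DSN name, the field list, and the
cursor/lock options are placeholders, and error handling is omitted):

    // Simplified sketch of the wrapper's open path. "ourdsn", the
    // field names, and the cursor/lock options are stand-ins.
    #import "C:\Program Files\Common Files\System\ado\msado15.dll" \
        no_namespace rename("EOF", "adoEOF")

    void OpenCoils()
    {
        _ConnectionPtr conn(__uuidof(Connection));
        conn->Open(L"DSN=ourdsn;", L"", L"", adConnectUnspecified);

        _RecordsetPtr rs(__uuidof(Recordset));
        // The filter travels inside the SQL text, so the WHERE clause
        // should be evaluated on the server.
        rs->Open(L"SELECT field1, field2 FROM inventory "
                 L"WHERE coil_id = 'Coil1' OR coil_id = 'Coil2' "
                 L"OR coil_id = 'Coil3'",
                 _variant_t((IDispatch *)conn, true),
                 adOpenForwardOnly, adLockReadOnly, adCmdText);

        while (!rs->adoEOF)
            rs->MoveNext();    // consume the rows

        rs->Close();
        conn->Close();
    }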

But ADO recordset objects have their own Filter property, which we're
not using. I added a function to the wrapper library named
"OpenFilteredTable", in which I set the underlying ADO Recordset
object's Filter property and then open the table. It seemed like a
reasonable thing to do, but unfortunately it doesn't work. I set the
Recordset's Filter property to "coil_id = 'XXXXX'", which should return
no rows, and then open the inventory table. Memory still crawls up, and
the query takes forever. What can I do in a VC++ 6.0 application to get
the speed that pgAdmin displays when getting filtered data from a
table?
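
For reference, OpenFilteredTable boils down to roughly this (same
#import setup as the sketch above; again the cursor/lock options are
placeholders):

    // Sketch of OpenFilteredTable. The filter is set on the recordset
    // itself rather than sent as part of the SQL.
    void OpenFilteredTable(_ConnectionPtr conn)
    {
        _RecordsetPtr rs(__uuidof(Recordset));

        // The Filter property is set before the open...
        rs->Filter = "coil_id = 'XXXXX'";    // matches no rows

        // ...and the table is then opened with no WHERE clause at all.
        rs->Open(L"inventory",
                 _variant_t((IDispatch *)conn, true),
                 adOpenStatic, adLockReadOnly, adCmdTable);

        // No rows are visible here, yet memory climbs at the same
        // 100-200 Kbytes/sec as before.
    }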

Thank you very much!

RobR
