From: Kostas Karadamoglou <karadamoglou_k(at)yahoo(dot)gr>
To: pgsql-general(at)postgresql(dot)org
Subject: Design problem About application related with cached rows
Date: 2005-08-14 09:43:05
Message-ID: ddn3i4$acm$2@sea.gmane.org
Lists: pgsql-general
Hello,
I am trying to create an application for my dissertation that caches rows from
a remote database. The application maintains a cache database that is identical
to the original, except that it has no autogenerated fields and referential
integrity is omitted.
I have designed a caching algorithm specific to the context of my
application. However, there is one problem I cannot solve:
I want to send a query to the remote database and then store the result set
in the cache instance. The cache database may already contain rows that
duplicate some of the rows in the result set returned by the query.
The easy solution is to insert the rows of the result set one by one, checking
for each row whether it already exists in the cache table. However, this
solution adds network latency, because useless data (rows that are already
cached) is moved over the network.
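For what it's worth, the per-row check I have in mind looks roughly like this
(just a sketch; the cache table name customers_cache and its columns are made
up for illustration):

-- insert one fetched row into the local cache only if its key is not there yet
INSERT INTO customers_cache (customerid, companyname)
SELECT 3, 'Some Company'
WHERE NOT EXISTS
      (SELECT 1 FROM customers_cache WHERE customerid = 3);

This works, but it still means every row, cached or not, has to travel from the
remote server to the application first.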
Do you know an efficient way to fetch only the difference (the rows that do
not yet exist in the cache instance) from the remote database using SQL
queries?
I tried a query of the following kind, but the database returns an overflow
message when the query string gets too long:
SELECT * FROM Customers WHERE CustomerID NOT IN (01, 02, 03, ...);
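One alternative I have been wondering about is shipping the cached keys to the
remote server first and letting it compute the difference there (again only a
sketch, assuming I am allowed to create temporary tables on the remote server;
the table name cached_ids is made up):

-- temporary table holding the keys that are already cached locally
CREATE TEMP TABLE cached_ids (customerid integer);

-- bulk-load the local keys, e.g. with COPY cached_ids FROM STDIN
-- or a few batched INSERTs

-- fetch only the customers whose keys are not in the temp table
SELECT c.*
FROM Customers c
WHERE NOT EXISTS (SELECT 1 FROM cached_ids i
                  WHERE i.customerid = c.CustomerID);

Would something like that avoid the query-length problem, or is there a better
way to do this?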