From: The Hermit Hacker <scrappy(at)hub(dot)org>
To: Chris Bitmead <chris(at)bitmead(dot)com>
Cc: "pgsql-hackers(at)postgreSQL(dot)org" <pgsql-hackers(at)postgresql(dot)org>, pgsql-hackers-oo(at)postgresql(dot)org
Subject: Re: Proposed new libpq API
Date: 2000-07-05 14:41:43
Message-ID: Pine.BSF.4.21.0007051140030.33627-100000@thelab.hub.org
Lists: pgsql-hackers
On Thu, 6 Jul 2000, Chris Bitmead wrote:
> The Hermit Hacker wrote:
>
> > What is the PQflush() for here? I took it to mean that it was required,
> > but then reading further down, it just sounds like it flushes what's
> > already been used and would be optional?
> >
> > Doesn't this just do what CURSORs already do then? Run the query, fetch
> > what you need, etc?
>
> There is similarity to cursors, but you shouldn't have to go to the
> trouble of using a cursor to get the main benefit, which is that you
> don't have to slurp the whole result set into memory at once. I believe
> this is how most DBMS interfaces work; with MySQL, for example, you can
> only fetch the next record, you can't get random access to the whole
> result set. That keeps memory usage very small, whereas Postgres memory
> usage will be huge. It shouldn't be necessary to resort to cursors to
> scale.
>
> So what PQflush is proposed to do is limit the amount that is cached. It
> discards earlier results. If you flush after every sequential access
> then you only have to use enough memory for a single record. If you use
> PQflush you no longer have random access to earlier results.
>
> Other designs are possible, like an interface for getting the next
> record one at a time and examining it. The idea of this proposal is to
> make the current random-access interface and a streaming interface
> interoperable, so you can mix and match them. You can take a current
> Postgres app that doesn't actually rely on random access (and I would
> hazard that most don't) and, just by adding the one line of code
> PQflush, greatly reduce memory consumption. Or you can mix and match
> and see a sliding window of the most recent X tuples. Or you can just
> ignore this and use the current features.
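For concreteness, my reading of that last point is that an existing loop
would only need one extra line, roughly like this (an illustrative sketch
only: the PQflush() here is the proposed per-result call from this thread,
not an existing libpq function, and the query is made up):

    /* current-style libpq loop, plus the proposed PQflush() call */
    PGresult *res = PQexec(conn, "SELECT id, name FROM bigtable");
    int       i;

    for (i = 0; i < PQntuples(res); i++)
    {
        printf("%s\n", PQgetvalue(res, i, 1));
        PQflush(res);   /* proposed: discard the rows already visited,
                         * so memory stays roughly one record deep */
    }
    PQclear(res);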
Okay, just playing devil's advocate here, that's all ... not against the
changes, just want to make sure that all bases are covered ...
One last comment ... when you say 'random access', are you saying that I
can't do a PQexec() to get the results of a SELECT, use a for loop to go
through those results, and then start from i=0 to go through that loop
again without having to do a new SELECT?
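I.e., something like the following works today against a single PQexec()
result, with no second round trip (table and column names here are made
up, purely for illustration):

    PGresult *res = PQexec(conn, "SELECT name FROM users");
    int       i, n = PQntuples(res);

    /* first pass over the cached result */
    for (i = 0; i < n; i++)
        printf("%s\n", PQgetvalue(res, i, 0));

    /* second pass over the very same result; no new SELECT issued */
    for (i = 0; i < n; i++)
        printf("%s\n", PQgetvalue(res, i, 0));

    PQclear(res);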
Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy
Systems Administrator @ hub.org
primary: scrappy(at)hub(dot)org secondary: scrappy(at){freebsd|postgresql}.org