From: | "Stephen R(dot) van den Berg" <srb(at)cuci(dot)nl> |
---|---|
To: | pgsql-hackers(at)postgresql(dot)org |
Subject: | Processing database query-results piecemeal |
Date: | 2008-06-30 11:17:42 |
Message-ID: | 20080630111742.GA19746@cuci.nl |
Lists: | pgsql-hackers |
I'm looking for the leanest, lowest-overhead way to interface with the
DB when processing large(r) amounts of binary data.
For simplicity, I want to avoid using the Large-Object facility.
It seems that the most efficient way to communicate with the DB would
be through PQexecParams(), which sidesteps the bytea-encoding issue
entirely.
However, two questions spring to mind:
- The docs say that you can use $1, $2, etc. to reference parameters.
What happens if you have more than 9 parameters?
Does it become $10 or ${10} or $(10), or is it simply not possible
to reference more than nine parameters this way?
(A sketch of such a call follows below.)
- Say the SELECT returns 1000 rows of 100MB each: is there a way
to prevent PQexecParams() from allocating 1000*100MB = 100GB
at once, and somehow extract the rows in smaller chunks?
(Incidentally, MySQL has such a facility.)
I.e. we would call libpq several times and get a few rows at a time,
read from the DB stream as needed.
(A cursor-based sketch follows below.)
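For reference, a minimal sketch of the first case: a PQexecParams() call
passing ten binary parameters and referencing the tenth one as plain $10
(assuming multi-digit parameter numbers are accepted; the chunks table,
its columns, and the insert_ten() helper are hypothetical, for
illustration only):

    #include <libpq-fe.h>

    /* Hypothetical helper: insert ten binary values in one statement,
     * referencing the tenth parameter as plain "$10". */
    static int insert_ten(PGconn *conn, const char *vals[10], const int lens[10])
    {
        const char *sql =
            "INSERT INTO chunks (c1,c2,c3,c4,c5,c6,c7,c8,c9,c10) "
            "VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10)";
        int formats[10];
        int i;

        for (i = 0; i < 10; i++)
            formats[i] = 1;          /* 1 = binary, so no bytea escaping */

        PGresult *res = PQexecParams(conn, sql,
                                     10,      /* nParams */
                                     NULL,    /* let the server infer types */
                                     vals, lens, formats,
                                     1);      /* binary results as well */
        ExecStatusType st = PQresultStatus(res);
        PQclear(res);
        return st == PGRES_COMMAND_OK;
    }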
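And for the second case, a rough sketch of a possible workaround using a
cursor, so that only a few rows per round trip are materialised on the
client (blobs_table, its payload column, and the connection string are
made up for illustration):

    #include <stdio.h>
    #include <libpq-fe.h>

    int main(void)
    {
        PGconn   *conn;
        PGresult *res;

        conn = PQconnectdb("dbname=test");   /* hypothetical connection string */
        if (PQstatus(conn) != CONNECTION_OK) {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            return 1;
        }

        res = PQexec(conn, "BEGIN");
        PQclear(res);

        /* Binary cursor over the large result set. */
        res = PQexec(conn,
                     "DECLARE blobs BINARY CURSOR FOR "
                     "SELECT payload FROM blobs_table");
        PQclear(res);

        /* Fetch a few rows per round trip (tune the count to the row
         * size); only that batch is held in client memory at a time. */
        for (;;) {
            int i, ntup;

            res = PQexec(conn, "FETCH 10 FROM blobs");
            ntup = PQntuples(res);
            if (PQresultStatus(res) != PGRES_TUPLES_OK || ntup == 0) {
                PQclear(res);
                break;
            }
            for (i = 0; i < ntup; i++) {
                const char *data = PQgetvalue(res, i, 0);
                int         len  = PQgetlength(res, i, 0);
                /* ... process len bytes at data ... */
                (void) data; (void) len;
            }
            PQclear(res);
        }

        res = PQexec(conn, "CLOSE blobs");
        PQclear(res);
        res = PQexec(conn, "COMMIT");
        PQclear(res);
        PQfinish(conn);
        return 0;
    }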
--
Sincerely,
Stephen R. van den Berg.