From: Yeb Havinga <yebhavinga(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: C libpq frontend library fetchsize
Date: 2010-03-18 16:54:25
Message-ID: 4BA25AC1.4080406@gmail.com
Lists: pgsql-hackers
Robert Haas wrote:
> On Fri, Feb 26, 2010 at 3:28 PM, Yeb Havinga <yebhavinga(at)gmail(dot)com> wrote:
>
>> I'm wondering if there would be community support for adding using the
>> execute message with a rownum > 0 in the c libpq client library, as it is
>> used by the jdbc driver with setFetchSize.
>>
>
> Not sure I follow what you're asking... what would the new/changed
> function signature be?
>
Hello Robert, list
I'm sorry I did not catch your reply until I searched the archives for
libpq; I hope you are not offended. However, I think the question is
somewhat answered in a reply I sent to Takahiro Itagaki, viz:
http://archives.postgresql.org/pgsql-hackers/2010-03/msg00015.php
The slowness described in the recent posting on -performance, where someone
compares MySQL and PostgreSQL speed, is caused by libpq transferring the
whole PGresult at one time.
(http://archives.postgresql.org/pgsql-performance/2010-03/msg00228.php)
ISTM that using cursors and then FETCH is not an adequate solution,
because 1) someone must realise that the PGresult object is
gathered/transferred completely under the hood of libpq before the first
row can be used by the application, and 2) the structure of the
application layer has to be altered to make use of partial results.
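For illustration, here is a minimal sketch of that cursor-based workaround
with plain libpq as it is today (table name and batch size are made up);
note how the application code has to be restructured around DECLARE/FETCH:

#include <stdio.h>
#include <libpq-fe.h>

/* Fetch a large result in batches of 1000 rows via an explicit cursor.
 * Every FETCH still materializes a complete PGresult before the
 * application sees a row, just a smaller one each time. */
static void
fetch_in_batches(PGconn *conn)
{
    PGresult *res;

    res = PQexec(conn, "BEGIN");
    PQclear(res);
    res = PQexec(conn, "DECLARE c CURSOR FOR SELECT * FROM bigtable");
    PQclear(res);

    for (;;)
    {
        int i;

        res = PQexec(conn, "FETCH 1000 FROM c");
        if (PQresultStatus(res) != PGRES_TUPLES_OK || PQntuples(res) == 0)
        {
            PQclear(res);
            break;              /* error or cursor exhausted */
        }
        for (i = 0; i < PQntuples(res); i++)
            printf("%s\n", PQgetvalue(res, i, 0));
        PQclear(res);
    }

    res = PQexec(conn, "CLOSE c");
    PQclear(res);
    res = PQexec(conn, "COMMIT");
    PQclear(res);
}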
What if the default operation of e.g. PHP on top of libpq would be as
follows: set some default fetch size (e.g. 1000 rows), then just call
getrow. In the PHP pg handling, a function like getnextrow would wait for
the first PGresult of 1000 rows; then, when that PGresult is depleted or
almost depleted, it would request the next one automatically. I see a lot
of benefits, such as lower memory requirements in libpq and fewer new
users asking why their query is so slow before the first row arrives, and
almost no concerns, apart perhaps from a small overhead of extra row
description messages. Maybe the biggest benefit of a pgsetfetchsize API
call would be to raise awareness of the fact that PGresults are
transferred completely (or partially, if there is interest in me or a
colleague of mine working on a patch for this).
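To make the idea more concrete, here is a rough sketch of what such an API
could look like. PQsetFetchSize and PQgetNextRow are purely hypothetical
names, nothing like them exists in libpq today; only PQsendQuery is real:

#include <stdio.h>
#include <libpq-fe.h>

/* Hypothetical additions, only to illustrate the proposal. */

/* Ask libpq to send Execute messages with a row limit of 'rows'
 * instead of requesting the complete result set in one go. */
extern int PQsetFetchSize(PGconn *conn, int rows);

/* Return the next row of the current query, or NULL when no more rows
 * are available.  Behind the scenes libpq would issue another Execute
 * message whenever the current batch is (almost) depleted. */
extern const char **PQgetNextRow(PGconn *conn);

/* Intended usage, assuming the two functions above existed: */
static void
stream_rows(PGconn *conn)
{
    const char **row;

    PQsetFetchSize(conn, 1000);                  /* batch size: 1000 rows */
    PQsendQuery(conn, "SELECT * FROM bigtable");
    while ((row = PQgetNextRow(conn)) != NULL)
        printf("%s\n", row[0]);                  /* rows arrive batch by batch */
}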
Besides that, another approach to getting data to clients faster could
perhaps be to use LZO, much in the same way that Google uses Zippy (see e.g.
http://feedblog.org/2008/10/12/google-bigtable-compression-zippy-and-bmdiff/)
to speed up data transfer and delivery. LZO has been mentioned before on
the mailing lists for pg_dump compression, but I think that with an
--enable-lzo option libpq could benefit too.
(http://archives.postgresql.org/pgsql-performance/2009-08/msg00053.php)
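As a rough illustration of how lightweight such compression calls are,
here is a minimal sketch using the miniLZO distribution of LZO (buffer
sizes and names are made up, and nothing here reflects actual libpq code):

#include <stdio.h>
#include <string.h>
#include "minilzo.h"

/* Work memory required by the LZO1X-1 compressor. */
static lzo_align_t wrkmem[(LZO1X_1_MEM_COMPRESS + sizeof(lzo_align_t) - 1)
                          / sizeof(lzo_align_t)];

int
main(void)
{
    unsigned char in[4096];
    /* worst-case expansion: in_len + in_len/16 + 64 + 3 */
    unsigned char out[4096 + 4096 / 16 + 64 + 3];
    lzo_uint out_len = sizeof(out);

    if (lzo_init() != LZO_E_OK)
        return 1;                        /* library/compiler mismatch */

    memset(in, 'x', sizeof(in));         /* stand-in for a chunk of result data */
    if (lzo1x_1_compress(in, sizeof(in), out, &out_len, wrkmem) != LZO_E_OK)
        return 1;

    printf("compressed %u bytes into %u bytes\n",
           (unsigned) sizeof(in), (unsigned) out_len);
    return 0;
}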
regards,
Yeb Havinga