From: matshyeq <matshyeq(at)gmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: libpq - lack of support to set the fetch size
Date: 2014-03-09 13:43:35
Message-ID: CAONr5=tK=omSo7B9jAMngED-9gvcemRRHe7_K7J=UoGjgCKxJw@mail.gmail.com
Lists: pgsql-general
Hello,
I've run into an issue while trying to fetch rows from a big table
(2 million rows) in my app.
Basically, I can't find an elegant and easy way (other than always using
cursors) to limit the number of rows fetched at a time, so the whole
result set is pulled into client memory at once.
This causes my application to fail due to excessive memory consumption.
I'm using Perl with the DBD::Pg library, but I contacted the maintainer,
who pointed out that this issue actually lies much deeper, in libpq:
"Unfortunately, this is a limitation in the underlying driver (libpq)
rather than DBD::Pg itself. There have been talks over the years of
supporting this, but nothing concrete yet. Your best bet would be to ask
about this on the Postgres lists"
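In the meantime, the cursor workaround mentioned above is what I'm falling
back on. A minimal sketch of it (table name `big_table`, cursor name
`big_cur`, batch size, and connection parameters are all placeholders for
illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Hypothetical connection parameters -- adjust for your environment.
# Cursors require a transaction, hence AutoCommit => 0.
my $dbh = DBI->connect('dbi:Pg:dbname=mydb', 'user', 'password',
                       { AutoCommit => 0, RaiseError => 1 });

# DECLARE a server-side cursor, then FETCH in batches so the client
# never holds more than $batch rows in memory at once.
my $batch = 10_000;
$dbh->do('DECLARE big_cur CURSOR FOR SELECT * FROM big_table');

while (1) {
    my $sth = $dbh->prepare("FETCH $batch FROM big_cur");
    $sth->execute;
    last if $sth->rows == 0;    # no rows left -- cursor exhausted
    while (my $row = $sth->fetchrow_arrayref) {
        # process $row here
    }
}

$dbh->do('CLOSE big_cur');
$dbh->commit;
$dbh->disconnect;
```

This keeps memory bounded, but it forces every large SELECT through
DECLARE/FETCH/CLOSE boilerplate instead of a simple fetch-size knob on the
statement handle, which is exactly the ergonomic gap I'm asking about.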
Would you consider putting this on the roadmap, so one day it gets improved?
Regarding the details of the issue, I believe it is well described at:
http://stackoverflow.com/questions/21960121/perl-dbdpg-script-fails-when-selecting-data-from-big-table
Kind Regards
~Msciwoj