Re: SELECT very slow

From: Volkan YAZICI <volkan(dot)yazici(at)gmail(dot)com>
To: Thomas Kellerer <spam_eater(at)gmx(dot)net>
Cc: pgsql-sql(at)postgresql(dot)org
Subject: Re: SELECT very slow
Date: 2005-06-09 07:31:50
Message-ID: 7104a7370506090031c8617d3@mail.gmail.com
Lists: pgsql-sql

Hi,

On 6/9/05, Thomas Kellerer <spam_eater(at)gmx(dot)net> wrote:
> No I want the whole result.

As Tom underlined:

On 6/9/05, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> The solution is to use a cursor and FETCH a reasonably
> small number of rows at a time.

AFAICT, query results are stored as arrays inside the PGresult structure.
Storing a huge result set in a single struct is therefore not very
practical: the whole set has to fit in client memory, and in the long run
you can even run into theoretical limits such as MAX_INT. Moreover, there
is rarely a practical need to retrieve thousands of rows in one go. If you
are dealing with very large data sets, try aggregating them with suitable
statements instead.

IMHO, you should use a cursor to fetch a suitable number of rows from the
table at a time and process them iteratively. (Furthermore, I think this is
one of the design goals of the FETCH mechanism.)
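
For instance, here is a minimal sketch of how that might look through
libpq. The connection string, the cursor name, and the "bigtable" table
are just placeholders for your own setup, and most error checking is
omitted for brevity:

    /* Fetch a large result set in batches with a cursor via libpq. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <libpq-fe.h>

    int main(void)
    {
        PGconn   *conn = PQconnectdb("dbname=test");  /* placeholder */
        PGresult *res;

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return EXIT_FAILURE;
        }

        /* Cursors only live inside a transaction block. */
        res = PQexec(conn, "BEGIN");
        PQclear(res);

        res = PQexec(conn, "DECLARE big_cur CURSOR FOR SELECT * FROM bigtable");
        PQclear(res);

        /* Pull a reasonably small number of rows per round trip. */
        for (;;)
        {
            int ntuples, i;

            res = PQexec(conn, "FETCH 1000 FROM big_cur");
            if (PQresultStatus(res) != PGRES_TUPLES_OK)
            {
                fprintf(stderr, "FETCH failed: %s", PQerrorMessage(conn));
                PQclear(res);
                break;
            }

            ntuples = PQntuples(res);
            if (ntuples == 0)          /* cursor exhausted */
            {
                PQclear(res);
                break;
            }

            for (i = 0; i < ntuples; i++)
                printf("%s\n", PQgetvalue(res, i, 0));

            PQclear(res);
        }

        res = PQexec(conn, "CLOSE big_cur");
        PQclear(res);
        res = PQexec(conn, "END");
        PQclear(res);

        PQfinish(conn);
        return EXIT_SUCCESS;
    }

Raising or lowering the FETCH count lets you trade client memory usage
against the number of round trips to the server.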

Also, as far as I can see, most API implementations (C++, Perl, PHP,
Python, etc.) use libpq as the layer between the API and the server.
Therefore, you will probably run into the same libpq limitations from
programming languages other than C as well.

Regards.
