Re: Performance with very large tables

From: Alban Hertroys <alban@magproductions.nl>
To: Jan van der Weijde <Jan.van.der.Weijde@attachmate.com>
Cc: Richard Huxton <dev@archonet.com>, pgsql-general@postgresql.org
Subject: Re: Performance with very large tables
Date: 2007-01-15 11:49:03
Message-ID: 45AB6A2F.6050104@magproductions.nl
Lists: pgsql-general

Jan van der Weijde wrote:
> Thank you.
> It is true he wants to have the first few records quickly and then
> continue with the next records. However, without LIMIT it already takes
> a very long time before the first record is returned.
> I reproduced this with a table with 1.1 million records on an XP machine
> and in my case it took about 25 seconds before the select returned the
> first record. I tried it both interactively with pgAdmin and with a
> C-application using a cursor (with hold). Both took about the same time.

Are you sure you aren't retrieving the entire result set first and only
then iterating over it? The fact that LIMIT changes this behaviour
seems to point in that direction.
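
For what it's worth, here is a minimal libpq sketch of what I mean by
fetching incrementally through a cursor; the table name big_table and
the batch size of 100 are made up for illustration:

#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("");   /* connection info from environment */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    PQclear(PQexec(conn, "BEGIN"));
    PQclear(PQexec(conn, "DECLARE cur CURSOR FOR SELECT * FROM big_table"));

    for (;;) {
        /* Each FETCH returns at most 100 rows; the first batch comes
         * back without waiting for the whole result set. */
        PGresult *res = PQexec(conn, "FETCH 100 FROM cur");
        if (PQresultStatus(res) != PGRES_TUPLES_OK) {
            fprintf(stderr, "FETCH failed: %s", PQerrorMessage(conn));
            PQclear(res);
            break;
        }
        int nrows = PQntuples(res);
        for (int i = 0; i < nrows; i++)
            printf("%s\n", PQgetvalue(res, i, 0));   /* first column only */
        PQclear(res);
        if (nrows == 0)
            break;   /* cursor exhausted */
    }

    PQclear(PQexec(conn, "CLOSE cur"));
    PQclear(PQexec(conn, "COMMIT"));
    PQfinish(conn);
    return 0;
}

With a plain (non-holdable) cursor inside a transaction the server hands
out rows on demand. A WITH HOLD cursor, on the other hand, is
materialized in full when the transaction that created it commits, so
depending on where your COMMIT sits that alone could account for the
delay you're seeing.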

A quick calculation shows that (provided my assumption holds true)
fetching each record takes about 23 usec on average (25 s / 1.1 million
records). A quick test on our dev-db (~40k records) averages about
5 usec per record, so that looks reasonable to me (apples and oranges,
I know).

--
Alban Hertroys
alban@magproductions.nl

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
7500 AK Enschede

// Integrate Your World //
