Re: Performance with very large tables

From: Bruno Wolff III <bruno(at)wolff(dot)to>
To: Jan van der Weijde <Jan(dot)van(dot)der(dot)Weijde(at)attachmate(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: Performance with very large tables
Date: 2007-01-16 18:11:58
Message-ID: 20070116181158.GB15932@wolff.to
Lists: pgsql-general

On Tue, Jan 16, 2007 at 12:06:38 -0600,
Bruno Wolff III <bruno(at)wolff(dot)to> wrote:
>
> Depending on exactly what you want to happen, you may be able to continue
> where you left off using a condition on the primary key, using the last
> primary key value for a row that you have viewed, rather than OFFSET.
> This will still be fast and will not skip rows that are now visible to
> your transaction (or show duplicates when deleted rows are no longer visible
> to your transaction).

I should have mentioned that you will also need an ORDER BY clause
on the primary key when doing things this way.
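The approach described above is often called keyset pagination: filter on the last primary-key value seen, order by the key, and limit the page size, rather than using OFFSET. A minimal sketch of the idea, using Python's built-in sqlite3 purely for illustration (the thread concerns PostgreSQL; the table name "items" and the page size are assumptions, not from the original message):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [(i, f"row-{i}") for i in range(1, 101)])

def fetch_page(conn, last_id, page_size=10):
    # Resume after the last primary-key value already viewed, with an
    # ORDER BY on the key, instead of OFFSET. The index on the primary
    # key keeps this fast regardless of how deep into the table we are.
    return conn.execute(
        "SELECT id, payload FROM items WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, page_size)).fetchall()

page1 = fetch_page(conn, 0)              # first 10 rows
page2 = fetch_page(conn, page1[-1][0])   # next 10, resuming after the last id seen
```

Unlike OFFSET, this does not re-scan and discard already-seen rows, and, as noted above, it neither skips rows that have since become visible to the transaction nor shows duplicates when deleted rows are no longer visible.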
