From: Bruno Wolff III <bruno(at)wolff(dot)to>
To: Jan van der Weijde <Jan(dot)van(dot)der(dot)Weijde(at)attachmate(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Performance with very large tables
Date: 2007-01-16 18:06:38
Message-ID: 20070116180638.GA15932@wolff.to
Lists: pgsql-general
On Mon, Jan 15, 2007 at 11:52:29 +0100,
Jan van der Weijde <Jan(dot)van(dot)der(dot)Weijde(at)attachmate(dot)com> wrote:
> Does anyone have a suggestion for this problem? Is there, for instance,
> an alternative to LIMIT/OFFSET so that SELECT on large tables has good
> performance?
Depending on exactly what you want to happen, you may be able to continue
where you left off using a condition on the primary key, restarting from the
last primary key value of a row you have already viewed, rather than using
OFFSET. This will still be fast, and it will not skip rows that are now
visible to your transaction (or show duplicates when deleted rows are no
longer visible to your transaction), the way an OFFSET-based approach can.
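As a rough sketch of that idea (the table name "items" and its integer
primary key "id" are just placeholders for whatever your schema uses):

    -- First batch: order by the primary key and take a fixed number of rows.
    SELECT * FROM items ORDER BY id LIMIT 100;

    -- Later batches: restart just past the last id already seen (1234 here
    -- stands in for the last value fetched), instead of using OFFSET.
    -- With an index on the primary key this stays fast no matter how far
    -- into the table you are.
    SELECT * FROM items WHERE id > 1234 ORDER BY id LIMIT 100;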
Another option would be to do all of the selects in a single serializable
transaction. This will use the same snapshot for all of the selects, so
you won't have rows appear or disappear on you due to other concurrent
transactions.
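For example, using the same hypothetical table, something along these lines
keeps every SELECT on one snapshot:

    BEGIN ISOLATION LEVEL SERIALIZABLE;
    SELECT * FROM items ORDER BY id LIMIT 100 OFFSET 0;
    SELECT * FROM items ORDER BY id LIMIT 100 OFFSET 100;
    -- ... further pages all see the same snapshot ...
    COMMIT;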