Performance with very large tables

From: "Jan van der Weijde" <Jan(dot)van(dot)der(dot)Weijde(at)attachmate(dot)com>
To: <pgsql-general(at)postgresql(dot)org>
Subject: Performance with very large tables
Date: 2007-01-15 10:52:29
Message-ID: 4B9C73D1EB78FE4A81475AE8A553B3C67DC532@exch-lei1.attachmate.com
Lists: pgsql-general

Hello all,

one of our customers is using PostgreSQL with tables containing millions
of records. A simple 'SELECT * FROM <table>' takes far too long in that
case, so we have advised him to use the LIMIT and OFFSET clauses.
However, he now has a concurrency problem: records deleted, added or
updated by one process shift the OFFSET seen by another process, so that
records are either skipped or read twice.
The solution to that problem is to use transactions with isolation level
serializable, but wrapping a loop that reads millions of records in a
single transaction is far from ideal, I think.
Does anyone have a suggestion for this problem? Is there, for instance,
an alternative to LIMIT/OFFSET that gives good performance for a SELECT
on large tables?
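As a sketch of the kind of alternative I mean (assuming the table has a
unique, ordered "id" column; all names here are made up):

    -- Keyset/seek paging: remember the last id read and continue after
    -- it, so concurrent inserts and deletes no longer shift the start.
    SELECT * FROM orders WHERE id > :last_seen_id ORDER BY id LIMIT 1000;

    -- Or a server-side cursor, fetched in batches inside one transaction:
    BEGIN;
    DECLARE big_scan CURSOR FOR SELECT * FROM orders ORDER BY id;
    FETCH 1000 FROM big_scan;   -- repeat until no rows are returned
    CLOSE big_scan;
    COMMIT;

Would something along those lines be the recommended approach?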

Thank you for your help

Jan van der Weijde
