From: | "Jan van der Weijde" <Jan(dot)van(dot)der(dot)Weijde(at)attachmate(dot)com> |
---|---|
To: | "Bruno Wolff III" <bruno(at)wolff(dot)to>, <pgsql-general(at)postgresql(dot)org> |
Subject: | Re: Performance with very large tables |
Date: | 2007-01-23 16:13:15 |
Message-ID: | 4B9C73D1EB78FE4A81475AE8A553B3C67DC54E@exch-lei1.attachmate.com |
Lists: pgsql-general
Hi Bruno,
Good to read that your advice matches the solution I was already considering!
Although I think this is something PostgreSQL should solve internally, I
prefer the WHERE clause approach over a long-running SERIALIZABLE transaction.
Thanks,
Jan
-----Original Message-----
From: Bruno Wolff III [mailto:bruno(at)wolff(dot)to]
Sent: Tuesday, January 16, 2007 19:12
To: Jan van der Weijde; pgsql-general(at)postgresql(dot)org
Subject: Re: [GENERAL] Performance with very large tables
On Tue, Jan 16, 2007 at 12:06:38 -0600,
Bruno Wolff III <bruno(at)wolff(dot)to> wrote:
>
> Depending on exactly what you want to happen, you may be able to continue
> where you left off using a condition on the primary key, using the last
> primary key value for a row that you have viewed, rather than OFFSET.
> This will still be fast and will not skip rows that are now visible to
> your transaction (or show duplicates when deleted rows are no longer visible
> to your transaction).

I should have mentioned that you also will need to use an ORDER BY clause
on the primary key when doing things this way.
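
For illustration, a minimal sketch of this keyset-pagination approach. The
table name (items), primary key column (id), page size, and the last-seen key
value are hypothetical placeholders, not taken from the original thread:

    -- First page: order by the primary key and cap the batch size
    SELECT * FROM items ORDER BY id LIMIT 50;

    -- Next page: continue after the last id already viewed,
    -- instead of scanning past skipped rows with OFFSET
    SELECT * FROM items WHERE id > 12345 ORDER BY id LIMIT 50;

Because the WHERE condition seeks directly to the continuation point via the
primary key index, each page costs roughly the same regardless of how deep
into the table you are, unlike OFFSET, which must walk over all skipped rows.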