Re: Read table rows in chunks

From: Kashif Zeeshan <kashi(dot)zeeshan(at)gmail(dot)com>
To: Sushrut Shivaswamy <sushrut(dot)shivaswamy(at)gmail(dot)com>
Cc: pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: Read table rows in chunks
Date: 2024-04-27 17:04:27
Message-ID: CAAPsdhcbdxhCsHDsVGf7gwb_x6UtjxYUaePoX7e9QWCJkR2rjQ@mail.gmail.com
Lists: pgsql-hackers

Hi

You can also use one of the following approaches (rough sketches below):

1. Cursors
2. OFFSET with the FETCH FIRST clause
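
As a minimal sketch of both approaches, assuming a hypothetical table named
events with a unique bigint column id, and a chunk size of 1000 (adjust these
to your schema):

    -- Approach 1: a cursor inside a transaction, fetched in chunks
    BEGIN;
    DECLARE row_chunks CURSOR FOR
        SELECT * FROM events ORDER BY id;
    FETCH 1000 FROM row_chunks;   -- first chunk
    FETCH 1000 FROM row_chunks;   -- next chunk; repeat until no rows are returned
    CLOSE row_chunks;
    COMMIT;

    -- Approach 2: OFFSET with the SQL-standard FETCH FIRST clause
    SELECT * FROM events ORDER BY id
    OFFSET 0 ROWS FETCH FIRST 1000 ROWS ONLY;     -- first chunk
    SELECT * FROM events ORDER BY id
    OFFSET 1000 ROWS FETCH FIRST 1000 ROWS ONLY;  -- second chunk

In both cases the ORDER BY on a unique column is what keeps the chunks stable;
without it the OFFSET-based approach can repeat or skip rows, as you observed.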

Regards
Kashif Zeeshan
Bitnine Global

On Sat, Apr 27, 2024 at 12:47 PM Sushrut Shivaswamy <
sushrut(dot)shivaswamy(at)gmail(dot)com> wrote:

> Hey,
>
> I"m trying to read the rows of a table in chunks to process them in a
> background worker.
> I want to ensure that each row is processed only once.
>
> I was thinking of using the `SELECT * ... OFFSET {offset_size} LIMIT
> {limit_size}` functionality for this but I'm running into issues.
>
> Some approaches I had in mind that aren't working out:
> - Try to use the transaction id to query rows created since the last
> processed transaction id
> - It seems Postgres does not expose row transaction ids so this
> approach is not feasible
> - Rely on OFFSET / LIMIT combination to query the next chunk of data
> - SELECT * does not guarantee ordering of rows so it's possible
> older rows repeat or newer rows are missed in a chunk
>
> Can you please suggest an alternative for periodically reading rows from a
> table in chunks while processing each row exactly once?
>
> Thanks,
> Sushrut
>
