From: Sushrut Shivaswamy <sushrut(dot)shivaswamy(at)gmail(dot)com>
To: pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Read table rows in chunks
Date: 2024-04-27 07:46:45
Message-ID: CAH5mb99Ej7P9+tpR9_hPpO8rMEqJK264WCAW8uz6Lhm0PRH5VQ@mail.gmail.com
Lists: pgsql-hackers
Hey,
I"m trying to read the rows of a table in chunks to process them in a
background worker.
I want to ensure that each row is processed only once.
I was thinking of using `SELECT * ... OFFSET {offset_size} LIMIT
{limit_size}` for this, but I'm running into issues.
Some approaches I had in mind that aren't working out (rough sketches of
both follow this list):
- Use the transaction id to query rows created since the last processed
  transaction id.
  - It seems Postgres does not expose row transaction ids, so this
    approach is not feasible.
- Rely on an OFFSET / LIMIT combination to query the next chunk of data.
  - `SELECT *` does not guarantee the ordering of rows, so it's possible
    that older rows repeat or newer rows are missed in a chunk.
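
To make the first idea concrete, this is the shape of the query I was
hoping to write. `my_table` and `last_processed_txid` are placeholders
for my own schema and bookkeeping, and `row_txid` stands in for a
per-row transaction id column that, as far as I can tell, Postgres does
not expose:

    -- Hypothetical: fetch only rows created after the last transaction
    -- id my worker has already processed. row_txid is not a real
    -- column, which is why this approach falls through.
    SELECT *
    FROM my_table
    WHERE row_txid > :last_processed_txid
    ORDER BY row_txid;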
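
And this is roughly the OFFSET / LIMIT pattern I've been trying, again
with placeholder names; the worker advances :offset_size by :limit_size
after each chunk is processed:

    -- Fetch the next chunk of rows. Without a stable ORDER BY, the row
    -- order can differ between executions, and concurrent inserts or
    -- deletes shift which rows fall into each window, so rows can be
    -- repeated or skipped across chunks.
    SELECT *
    FROM my_table
    OFFSET :offset_size
    LIMIT :limit_size;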
Can you please suggest an alternative for periodically reading rows from a
table in chunks while processing each row exactly once?
Thanks,
Sushrut