From: Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>
To: Jeremy Finzel <finzelj(at)gmail(dot)com>, PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Backfill bgworker Extension?
Date: 2017-12-12 20:26:12
Message-ID: b4df6eb4-0347-dc47-f052-7a500aaf9f78@2ndquadrant.com
Lists: pgsql-hackers
On 12/12/17 13:03, Jeremy Finzel wrote:
> To be clear, what I mean is batch-updating a large set of data in small
> pieces so as to avoid things like lock contention and replication lag.
> Sometimes these jobs have a driving table containing the source data to
> update in a destination table based on a key column, but sometimes it is
> something as simple as setting a single specific value across a huge table.
>
> I would love instead to have a Postgres extension that uses background
> workers to accomplish this, especially if it were part of core. Before I
> venture into writing something like this as an extension, would it be
> considered appropriate for inclusion in Postgres core?
I don't see what the common ground between different variants of this
use case would be. Aren't you basically just looking to execute a
use-case-specific stored procedure in the background?
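[Editor's note: the batching pattern Jeremy describes, walking the key space in small slices and committing between slices so locks are held only briefly, can be sketched roughly as below. The sketch uses Python's built-in sqlite3 so it is self-contained and runnable; the table, column names, and batch size are illustrative, and the same keyset-pagination loop shape applies to a PostgreSQL backfill, whether driven by a client, a stored procedure, or a background worker.]

```python
# Minimal sketch of a keyset-paginated batch update: instead of one big
# UPDATE over the whole table, update small slices of the key range and
# commit after each slice. Table/column names and BATCH are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE big (id INTEGER PRIMARY KEY, flag INTEGER)")
conn.executemany("INSERT INTO big (id, flag) VALUES (?, 0)",
                 [(i,) for i in range(1, 1001)])
conn.commit()

BATCH = 100
last_id = 0
while True:
    # Find the upper bound of the next slice of at most BATCH keys.
    row = conn.execute(
        "SELECT MAX(id) FROM (SELECT id FROM big WHERE id > ? "
        "ORDER BY id LIMIT ?)", (last_id, BATCH)).fetchone()
    if row[0] is None:
        break  # no keys left above last_id; backfill is done
    conn.execute("UPDATE big SET flag = 1 WHERE id > ? AND id <= ?",
                 (last_id, row[0]))
    conn.commit()  # release locks between batches
    last_id = row[0]
```

Committing per slice is what keeps each transaction short; in PostgreSQL this also lets replication and vacuum keep pace instead of replaying one huge transaction.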
--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services