From: Justin Pryzby <pryzby(at)telsasoft(dot)com>
To: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: proposal: possibility to read dumped table's name from file
Date: 2020-05-29 18:25:15
Message-ID: 20200529182514.GL17850@telsasoft.com
Lists: pgsql-hackers
On Fri, May 29, 2020 at 04:21:00PM +0200, Pavel Stehule wrote:
> one of my customers has to specify dumped tables name by name. After years
> of growth in database size and table count, he has run into the command-line
> length limit. He needs to read the list of tables from a file (or from stdin).
+1 - we would use this.
We pass a regex (actually a pg_dump pattern) of tables to skip: timeseries
partitions which are more than a few days old, which have already been dumped
once, are not expected to change, and so are typically not redumped. We're
nowhere near the execve() limit, but it'd be nice if the command were primarily
a list of options rather than one long regex.
Please also support reading the patterns for --exclude-table=PATTERN from a file.
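For context, here's a minimal sketch of the workaround available today:
expand a file of patterns into repeated --exclude-table options in the shell.
The file name and patterns below are illustrative only.

```shell
# Sketch of the current workaround, assuming one pg_dump pattern per line
# in exclude_tables.txt (the file name and patterns are made up here).
printf '%s\n' 'log_2020*' 'audit_*' > exclude_tables.txt

args=""
while IFS= read -r pat; do
    # Accumulate one --exclude-table option per pattern.
    args="$args --exclude-table=$pat"
done < exclude_tables.txt

# Print the command rather than running it, since no database is assumed.
echo "pg_dump$args mydb"
```

This works, but the pattern list is still subject to the command-line length
limit, which is exactly what reading from a file would avoid.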
I'm drawing a parallel between this and rsync's --include/--exclude and
--filter options. We'd be implementing a new --filter, which might use syntax
similar to rsync's (which I always forget).
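As a reminder of the rsync style being compared to: rsync filter rules are
one rule per line, '+' includes, '-' excludes, and the first matching rule
wins; a file of rules can be pulled in with --filter='merge FILE'. A
hypothetical pg_dump equivalent might read rules like these (the table names
are illustrative):

```
+ measurements_2020_05*
- measurements_*
```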
--
Justin