From: Corey Huinker <corey(dot)huinker(at)gmail(dot)com>
To: pierre(dot)ducroquet(at)people-doc(dot)com
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: [RFC] Add an until-0 loop in psql
Date: 2018-04-27 19:40:25
Message-ID: CADkLM=fELaazmN2sc0n4UkstHt=9XeckO-xVc5xMXot6bKA0Rg@mail.gmail.com
Lists: pgsql-hackers
As of v11, DO blocks can do transactions. I think this will meet your needs.
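Something like this, roughly (a sketch, untested, adapting your example; note that VACUUM cannot run inside a function or DO block, so it would still have to be issued separately):

DO $$
DECLARE
    rows_deleted bigint;
BEGIN
    LOOP
        DELETE FROM big_table
        WHERE id IN (SELECT id FROM big_table WHERE bad = true LIMIT 1000);
        GET DIAGNOSTICS rows_deleted = ROW_COUNT;
        EXIT WHEN rows_deleted = 0;
        COMMIT;  -- v11+: ends the transaction so locks are released each batch
    END LOOP;
END
$$;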
A loop that starts at point X in the code and terminates at point Y has to
know how to jump back in the file (if there even is a file!) to point X and
re-interpret commands as it makes its way back through the "file" toward
point Y again... a place it might never reach, or it might hit some other
loop terminator first. \commands can be hidden inside psql variables,
files can be conditionally included based on \if statements, and those
files might have loop starters/terminators of their own. And those commands *are*
processed.
That, or you'd have to capture the code by somehow parsing ahead to the
matching \end-until (processing all inner loops as you go, and the files they
include, etc.), but that means variables that were expanded on the first
pass are *not* re-expanded on subsequent iterations, and that makes it hard to
set an exit-condition variable. It would also seriously alter what psql is
when inside that loop.
I once did a presentation on ways to (ab)use psql, and one thing I did was
recursion via include files. Adapting your loop as literally as possible,
it would look like this:
loop_file.sql:
BEGIN;
WITH deleted_rows AS (
    DELETE FROM big_table
    WHERE id IN (SELECT id FROM big_table WHERE bad = true LIMIT 1000)
    RETURNING 1
)
SELECT (COUNT(*) > 0) AS deleted_some_rows FROM deleted_rows
\gset
COMMIT;
VACUUM big_table;  -- VACUUM can't run inside the transaction block
\if :deleted_some_rows
    \include loop_file.sql
\endif
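You kick it off with a plain psql -f loop_file.sql (or \include it from a
session), and each pass re-includes the file until deleted_some_rows comes
back false.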
What you don't see here is that you're using your psql process's available
open file handles as a stack, and when you hit that limit, psql will fail.
If you remove that limit, you get a bit further before psql segfaults
on you. I think I got ~2700 files deep before that happened. Your stackage
may vary.
I'm not saying this is a good solution; quite the contrary. I think the
sane solution is right around the corner in v11.
Now if we just had a way of passing parameters into DO blocks...
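In the meantime, the usual workaround (psql won't interpolate its variables
inside dollar quotes) is to smuggle values in through
set_config()/current_setting(). A sketch, with myapp.batch_size as an
arbitrary placeholder name:

\set batch_size 1000
SELECT set_config('myapp.batch_size', :'batch_size', false);
DO $$
BEGIN
    -- read the value back inside the DO block
    RAISE NOTICE 'deleting in batches of %',
        current_setting('myapp.batch_size')::int;
END
$$;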
On Tue, Apr 24, 2018 at 3:59 AM Pierre Ducroquet <pierre(dot)ducroquet(at)people-doc(dot)com> wrote:
> Hi
>
> When running database migrations with .sql files on a live database, it's not
> uncommon to have to run a migration in a loop to prevent a big lock on a table.
> For instance, if one wants to delete some old data from a big table, one would
> write:
>
> DELETE FROM big_table WHERE id IN (SELECT id FROM big_table WHERE bad = true
> LIMIT 1000);
> VACUUM big_table;
>
> Right now, doing this is quite inefficient. We either have to write a script
> in another language, or run psql in a shell loop and wait for the migration
> to stop altering rows.
>
> The attached **proof of concept** patch (I insist, it's a 15-minute hack
> sprint with no previous knowledge of psql code) implements an 'until-0' loop
> in psql.
> The previous migration could simply be written as:
>
> \until-0
> BEGIN;
> DELETE FROM big_table WHERE id IN (SELECT id FROM big_table WHERE bad = true
> LIMIT 1000);
> COMMIT;
> VACUUM big_table;
> \end-until
>
> And psql will execute it until no rows are affected by the inner queries.
>
> I am willing to write a proper patch for this (I hope the tell/seek approach
> is an acceptable implementation…), but I would like some feedback first.
>
> Thanks
>
> Pierre