Re: cleanup temporary files after crash

From: "Euler Taveira" <euler(at)eulerto(dot)com>
To: "Tomas Vondra" <tomas(dot)vondra(at)enterprisedb(dot)com>, "Thomas Munro" <thomas(dot)munro(at)gmail(dot)com>, "Michael Paquier" <michael(at)paquier(dot)xyz>
Cc: "Euler Taveira" <euler(dot)taveira(at)2ndquadrant(dot)com>, "Anastasia Lubennikova" <a(dot)lubennikova(at)postgrespro(dot)ru>, "Tomas Vondra" <tomas(dot)vondra(at)2ndquadrant(dot)com>, "PostgreSQL Hackers" <pgsql-hackers(at)lists(dot)postgresql(dot)org>, "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Subject: Re: cleanup temporary files after crash
Date: 2021-03-18 20:06:23
Message-ID: 54663718-1d87-46f2-a27b-e7edacbfae0d@www.fastmail.com
Lists: pgsql-hackers

On Thu, Mar 18, 2021, at 4:20 PM, Tomas Vondra wrote:
> I think a better way to test this would be to use a tuple lock:
I predicted such issues with this test.

> setup:
>
> create table t (a int unique);
>
> session 1:
>
> begin;
> insert into t values (1);
> ... keep open ...
>
> session 2:
>
> begin;
> set work_mem = '64kB';
> insert into t select i from generate_series(1,10000) s(i);
> ... should block ...
>
> Then, once the second session is waiting on the tuple lock, kill the
> backend. We might as well test that there actually is a temp file
> first, and then test that it disappeared.
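
For anyone reproducing the sequence above by hand, here is a rough sketch
of the pre-crash checks, run from a third session. It assumes PostgreSQL
12+ (for pg_ls_tmpdir(), which also needs superuser or the pg_monitor
role), and the query-text match is only illustrative. Note that
generate_series() is materialized into a tuplestore before the INSERT
sees its first row, so with work_mem = '64kB' the spill file already
exists while session 2 waits on the lock:

    -- third session, while session 2 is blocked
    SELECT count(*) > 0 AS has_temp_file FROM pg_ls_tmpdir();

    -- find the blocked backend's PID, then crash it from the shell
    -- with "kill -9 <pid>"; pg_terminate_backend() is no good here,
    -- because a normal backend exit already removes its temp files
    SELECT pid
      FROM pg_stat_activity
     WHERE wait_event_type = 'Lock'
       AND query LIKE 'insert into t select%';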
Your suggestion works for me. Maybe you could use fewer rows in the
session 2 query; I experimented with 1,000 rows and it still generates a
temporary file.
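
After the kill, the postmaster terminates the remaining backends and runs
crash recovery, so the second half of the check has to come from a fresh
connection. Assuming the patch is applied (and, if the proposed
remove_temp_files_after_crash setting exists in the build under test,
that it is on), something like:

    -- fresh connection, after crash recovery finishes;
    -- true with the patch, while unpatched the stale spill file survives
    SELECT count(*) = 0 AS temp_files_removed FROM pg_ls_tmpdir();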

--
Euler Taveira
EDB https://www.enterprisedb.com/
