From: Tomas Vondra <tomas(dot)vondra(at)enterprisedb(dot)com>
To: Euler Taveira <euler(at)eulerto(dot)com>, Thomas Munro <thomas(dot)munro(at)gmail(dot)com>, Michael Paquier <michael(at)paquier(dot)xyz>
Cc: Euler Taveira <euler(dot)taveira(at)2ndquadrant(dot)com>, Anastasia Lubennikova <a(dot)lubennikova(at)postgrespro(dot)ru>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Subject: Re: cleanup temporary files after crash
Date: 2021-03-18 20:51:11
Message-ID: 1ec33f3d-ea4c-28c8-502c-3eb0a008a6ac@enterprisedb.com
Lists: pgsql-hackers
On 3/18/21 9:06 PM, Euler Taveira wrote:
> On Thu, Mar 18, 2021, at 4:20 PM, Tomas Vondra wrote:
>> I think a better way to test this would be to use a tuple lock:
> I predicted such issues with this test.
>
>> setup:
>>
>> create table t (a int unique);
>>
>> session 1:
>>
>> begin;
>> insert into t values (1);
>> ... keep open ...
>>
>> session 2:
>>
>> begin;
>> set work_mem = '64kB';
>> insert into t select i from generate_series(1,10000) s(i);
>> ... should block ...
>>
>> Then, once the second session is waiting on the tuple, kill the
>> backend. We might as well test that there actually is a temp file first,
>> and then test that it disappeared.
> Your suggestion works for me. Maybe you could use fewer rows in the session 2
> query. I experimented with 1k rows and it generates a temporary file.
>
OK. Can you prepare a patch with the proposed test approach?
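For the "temp file exists" check, a minimal sketch of what I have in mind
(an assumption on my part that pg_ls_tmpdir(), available since PG 12, is
acceptable here; it requires superuser or pg_monitor membership):

session 3 (while session 2 is blocked):

select count(*) > 0 as has_temp_file from pg_ls_tmpdir();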
FWIW I can reproduce this on a 32-bit ARM system (rpi4), where 500 rows
simply does not use a temp file, while 1000 rows works fine. On x86_64
the temp file is created even with 500 rows. So there clearly is some
platform dependency, not sure if it's due to 32/64 bits, alignment or
something else. In any case, 500 rows seems to be right at the
threshold.
We need to do both: stop relying on timing and increase the number of
rows, so we consistently get temp files.
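As a sketch (assuming 5000 rows comfortably exceeds a 64kB work_mem on
both 32-bit and 64-bit builds; untested):

session 2:

begin;
set work_mem = '64kB';
insert into t select i from generate_series(1,5000) s(i);
-- the generate_series tuplestore spills past work_mem, then the insert
-- blocks on session 1's uncommitted row, leaving the temp file around

session 3:

select pid from pg_stat_activity where wait_event_type = 'Lock';
-- kill -9 that pid from a shell to simulate a crash, reconnect after
-- crash recovery, and verify the temp file is gone (assuming nothing
-- else is using temp files concurrently):
select count(*) from pg_ls_tmpdir();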
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company