From: Kyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: PHJ file leak.
Date: 2019-11-11 12:24:18
Message-ID: 20191111.212418.2222262873417235945.horikyota.ntt@gmail.com
Lists: pgsql-hackers
Hello. While looking at a patch, I found that PHJ (parallel hash join) sometimes complains about file leaks when accompanied by LIMIT.

The repro is very simple:
create table t as (select a, a as b from generate_series(0, 999999) a);
analyze t;
select t.a from t join t t2 on (t.a = t2.a) limit 1;
Once every several (or a few dozen) executions, the last query complains as follows:
WARNING: temporary file leak: File 15 still referenced
WARNING: temporary file leak: File 17 still referenced
This happens when PHJ is used, and the leaked file is a shared tuplestore for
outer tuples, which was opened by sts_parallel_scan_next() called from
ExecParallelHashJoinOuterGetTuple(). It seems to me that
ExecHashTableDestroy is forgetting to release the shared tuplestore
accessors. Please find the attached patch.
regards.
--
Kyotaro Horiguchi
NTT Open Source Software Center
Attachment: Make_PHJ_close_shtupstore.patch (text/x-patch, 737 bytes)