From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
---|---|
To: | Alvaro Herrera <alvherre(at)2ndquadrant(dot)com> |
Cc: | Pg Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org> |
Subject: | Re: coverage increase for worker_spi |
Date: | 2019-05-29 22:39:36 |
Message-ID: | 41618.1559169576@sss.pgh.pa.us |
Lists: | pgsql-hackers |
Alvaro Herrera <alvherre(at)2ndquadrant(dot)com> writes:
> Tom pointed out that coverage for worker_spi is 0%. For a module that
> only exists to provide coverage, that's pretty stupid. This patch
> increases coverage to 90.9% line-wise and 100% function-wise, which
> seems like a sufficient starting point.
> How would people feel about me getting this in master at this point in
> the cycle, it being just some test code? We can easily revert if
> it seems too unstable.
I'm not opposed to adding a new test case at this point in the cycle,
but as written this one seems more or less guaranteed to fail under
load. You can't just sleep for worker_spi.naptime and expect that
the worker will certainly have run.
Perhaps you could use a plpgsql DO block with a loop to wait up
to X seconds until the expected state appears, for X around 120
to 180 seconds (compare poll_query_until in the TAP tests).
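A minimal sketch of the polling approach being suggested here (the schema/table names and the 0.1 s poll interval are illustrative assumptions, not taken from the patch — worker_spi's actual target object may differ):

```sql
DO $$
DECLARE
  -- Give up after ~180 seconds, per the suggested upper bound
  deadline timestamptz := clock_timestamp() + interval '180 seconds';
BEGIN
  LOOP
    -- Assumed condition: the table the worker is expected to create exists
    PERFORM 1 FROM pg_tables
      WHERE schemaname = 'schema1' AND tablename = 'counted';
    EXIT WHEN FOUND;  -- PERFORM sets FOUND when a row was produced
    IF clock_timestamp() > deadline THEN
      RAISE EXCEPTION 'timed out waiting for worker_spi to run';
    END IF;
    PERFORM pg_sleep(0.1);  -- brief pause between polls
  END LOOP;
END
$$;
```

Because the loop exits as soon as the condition holds, the test normally finishes quickly; the long deadline only matters on heavily loaded machines, which is exactly the failure mode a fixed sleep on worker_spi.naptime cannot handle.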
regards, tom lane