From: Andres Freund <andres(at)anarazel(dot)de>
To: Noah Misch <noah(at)leadboat(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: [RFC] Removing "magic" oids
Date: 2018-11-20 09:20:04
Message-ID: 20181120092004.f3j2znfodallkksn@alap3.anarazel.de
Lists: pgsql-hackers
On 2018-11-14 21:02:41 -0800, Andres Freund wrote:
> Hi,
>
> On 2018-11-15 04:57:28 +0000, Noah Misch wrote:
> > On Wed, Nov 14, 2018 at 12:01:52AM -0800, Andres Freund wrote:
> > > - one pgbench test tested concurrent insertions into a table with
> > > oids, as some sort of stress test for lwlocks and spinlocks. I *think*
> > > this doesn't really have to be a system oid column, and this was just
> > > because that's how we triggered a bug on some machine. Noah, do I get
> > > this right?
> >
> > The point of the test is to exercise OidGenLock by issuing many parallel
> > GetNewOidWithIndex() and verifying absence of duplicates. There's nothing
> > special about OidGenLock, but it is important to use an operation that takes a
> > particular LWLock many times, quickly. If the test query spends too much time
> > on things other than taking locks, it will catch locking races too rarely.
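(For reference, the setup the existing test uses, roughly as it appears in
001_pgbench_with_server.pl; the unique index is what turns a duplicated
oid into a visible failure:)

    # old setup: duplicate oids from GetNewOidWithIndex() under a broken
    # OidGenLock would trip the unique index immediately
    $node->safe_psql('postgres',
    	'CREATE UNLOGGED TABLE oid_tbl () WITH OIDS; '
    	  . 'ALTER TABLE oid_tbl ADD UNIQUE (oid);');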
>
> Sequences ought to do that, too. And if it's borked, we'd hopefully see
> unique violations. But it's definitely not a 1:1 replacement.
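(As a minimal sketch of what the replacement setup could look like; the
serial-primary-key DDL here is an illustration, not part of the patch:)

    # hypothetical table setup; the primary key turns a duplicated
    # sequence value into a hard error, like a duplicated oid before
    $node->safe_psql('postgres',
    	'CREATE UNLOGGED TABLE insert_tbl (id serial PRIMARY KEY);');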
>
>
> > > pgbench(
> > > '--no-vacuum --client=5 --protocol=prepared --transactions=25',
> > > 0,
> > > [qr{processed: 125/125}],
> > > [qr{^$}],
> > > - 'concurrency OID generation',
> > > + 'concurrent insert generation',
> > > {
> > > - '001_pgbench_concurrent_oid_generation' =>
> > > - 'INSERT INTO oid_tbl SELECT FROM generate_series(1,1000);'
> > > + '001_pgbench_concurrent_insert' =>
> > > + 'INSERT INTO insert_tbl SELECT FROM generate_series(1,1000);'
> >
> > The code for sequences is quite different, so this may or may not be an
> > effective replacement. To study that, you could remove a few barriers from
> > lwlock.c, measure how many iterations today's test needs to catch the
> > mutation, and then measure the same for this proposal.
>
> Unfortunately it's really hard to hit barrier issues on x86. I think
> that's the only arch I currently have access to, but it's possible I
> have access to some ppc too. If you have a better idea for a
> replacement test, I'd be all ears.
I've tested this on ppc. Neither the old nor the new version of the
stress test exercises spinlocks sufficiently to error out with weakened
spinlocks (not that surprising, as there are no spinlocks in any hot
path of either workload). Both versions very reliably trigger failures
with weakened lwlocks. So I think we're comparatively good on that
front.
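For concreteness, a sketch of the harness Noah's iterations-to-catch
measurement implies (hypothetical: assumes a build with deliberately
weakened lwlocks installed, the insert_tbl setup above, and a $script
file holding the INSERT statement):

    # rerun the concurrent workload until a duplicated sequence value
    # trips insert_tbl's primary key, and report how many runs it took
    my $caught;
    for my $run (1 .. 500)
    {
    	my ($out, $err);
    	IPC::Run::run(
    		[ 'pgbench', '-n', '-c', '5', '-t', '25',
    		  '-f', $script, $node->connstr('postgres') ],
    		'>', \$out, '2>', \$err);
    	if ($err =~ /duplicate key value/) { $caught = $run; last; }
    }
    note($caught
    	? "caught after $caught runs"
    	: "not caught in 500 runs");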
Greetings,
Andres Freund