From: ofir(dot)manor(at)gmail(dot)com
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: "Shulgin, Oleksandr" <oleksandr(dot)shulgin(at)zalando(dot)de>, Pg Bugs <pgsql-bugs(at)postgresql(dot)org>
Subject: Re: BUG #13725: Logical Decoding - wrong results with large transactions and unfortunate timing
Date: 2015-10-26 13:05:52
Message-ID: CAPL_MpPTaxc6mU-bLYtxkG23qk5aNftd6Q9GbQdK3GtyT5JV-A@mail.gmail.com
Lists: pgsql-bugs
> > 2. I have validated that a single call to pg_logical_slot_get_changes
> > returns a result set with duplicates, going back to the start (I've seen
> > it with a Java debugger, looping over the forward-only cursor of the
> > SELECT from the replication slot). That is the bug I'm reporting - not
> > across calls but within a single call.
>
> That'd be rather weird. I'll look into it, but due to the pgconf.eu
> conference this week I can't promise I'll get to it this week.
>
>
> Andres
>
Sure, triage can wait - make the most of the conference, and thanks for the
fast replies!
I have updated the test case to prove that all the output comes from a
single call: I added a distinct constant to each SQL query - the bash loop
iteration number:
for i in `seq 1 10000`; do
  echo "SELECT $i, v.* FROM pg_logical_slot_get_changes('test2_slot', NULL, NULL) v;"
done | psql --quiet --tuples-only | cat --squeeze-blank > out2
wc -l < out2
You can see that all the output comes from the same SQL statement (in this
case, statement 419):
419 | 1/76344D18 | 450880 | BEGIN 450880
419 | 1/76344D18 | 450880 | table public.test: INSERT: id[integer]:1 v[character varying]:'3'
419 | 1/76344DE0 | 450880 | table public.test: INSERT: id[integer]:2 v[character varying]:'6'
...
419 | 1/76BD9348 | 450880 | table public.test: INSERT: id[integer]:61439 v[character varying]:'184317'
419 | 1/76BD93D8 | 450880 | table public.test: INSERT: id[integer]:61440 v[character varying]:'184320'
419 | 1/76344D18 | 450880 | table public.test: INSERT: id[integer]:1 v[character varying]:'3'
419 | 1/76344DE0 | 450880 | table public.test: INSERT: id[integer]:2 v[character varying]:'6'
...
419 | 1/7713BEB0 | 450880 | table public.test: INSERT: id[integer]:99999 v[character varying]:'299997'
419 | 1/7713BF40 | 450880 | table public.test: INSERT: id[integer]:100000 v[character varying]:'300000'
419 | 1/7713C028 | 450880 | COMMIT 450880
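As a side note, the duplication is also easy to detect mechanically rather
than by eye. The small script below is a sketch, assuming the four-column
`|`-separated format shown above (statement number, LSN, xid, change text);
`find_duplicate_changes` is a hypothetical helper written for this email,
not part of any PostgreSQL tooling:

```python
from collections import Counter

def find_duplicate_changes(lines):
    """Return {statement_number: [(lsn, change), ...]} for any (LSN, change)
    pair that appears more than once within one statement's output."""
    counts = {}  # statement number -> Counter of (lsn, change) pairs
    for line in lines:
        parts = [p.strip() for p in line.split('|')]
        if len(parts) < 4:
            continue  # skip blank lines and '...' ellipses
        stmt, lsn, change = parts[0], parts[1], parts[3]
        counts.setdefault(stmt, Counter())[(lsn, change)] += 1
    return {stmt: [key for key, n in c.items() if n > 1]
            for stmt, c in counts.items()
            if any(n > 1 for n in c.values())}

# The duplicated rows from the output above:
sample = [
    "419 | 1/76344D18 | 450880 | BEGIN 450880",
    "419 | 1/76344D18 | 450880 | table public.test: INSERT: id[integer]:1 v[character varying]:'3'",
    "419 | 1/76344DE0 | 450880 | table public.test: INSERT: id[integer]:2 v[character varying]:'6'",
    "419 | 1/76344D18 | 450880 | table public.test: INSERT: id[integer]:1 v[character varying]:'3'",
    "419 | 1/76344DE0 | 450880 | table public.test: INSERT: id[integer]:2 v[character varying]:'6'",
]
dups = find_duplicate_changes(sample)
print(dups)
```

Note the key is the (LSN, change) pair, not the LSN alone - BEGIN and the
first INSERT legitimately share an LSN, so keying on LSN alone would
produce a false positive.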
--
Ofir Manor
Blog: http://ofirm.wordpress.com
LinkedIn: http://il.linkedin.com/in/ofirmanor
Twitter: @ofirm Mobile: +972-54-7801286