From: | David Rowley <dgrowleyml(at)gmail(dot)com> |
---|---|
To: | Zhang Mingli <zmlpostgres(at)gmail(dot)com> |
Cc: | PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org> |
Subject: | Re: COPY FROM crash |
Date: | 2024-07-30 05:35:36 |
Message-ID: | CAApHDvpQ6t9ROcqbD-OgqR04Kfq4vQKw79Vo6r5j+ciHwsSfkA@mail.gmail.com |
Lists: | pgsql-hackers |
On Tue, 30 Jul 2024 at 15:52, Zhang Mingli <zmlpostgres(at)gmail(dot)com> wrote:
> I ran a test on Postgres and it hits a similar issue (different place but the same function).
>
> However, it's a little hard to reproduce because it only happens when inserting the next tuple after a previous COPY multi-insert buffer has been flushed.
>
> To reproduce it easily, change the macros to:
>
> #define MAX_BUFFERED_TUPLES 1
> #define MAX_PARTITION_BUFFERS 0
I think you're going to need to demonstrate to us there's an actual
PostgreSQL bug here with a test case that causes a crash without
changing the above definitions.
It seems to me that it's not valid to set MAX_PARTITION_BUFFERS to
anything less than 2 due to the code inside
CopyMultiInsertInfoFlush(). If we find the CopyMultiInsertBuffer for
'curr_rri' there, that code misbehaves when the list contains only a
single CopyMultiInsertBuffer, because it expects another item to
remain in the list after the list_delete_first() (see the sketch
below). If you're only able to get it to misbehave by setting
MAX_PARTITION_BUFFERS to less than 2, then my suggested fix would be
to add a comment to say that values less than two are not supported.
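
For reference, a simplified sketch of the buffer-trimming loop at the
end of CopyMultiInsertInfoFlush() (paraphrased from
src/backend/commands/copyfrom.c; details from memory, so treat it as
illustrative rather than exact):

/* Trim the tracked buffer list down to MAX_PARTITION_BUFFERS. */
while (list_length(miinfo->multiInsertBuffers) > MAX_PARTITION_BUFFERS)
{
    CopyMultiInsertBuffer *buffer;

    buffer = (CopyMultiInsertBuffer *) linitial(miinfo->multiInsertBuffers);

    /*
     * Never remove the buffer for the partition we're currently
     * inserting into; move it to the tail and look at the next
     * buffer instead.
     */
    if (buffer->resultRelInfo == curr_rri)
    {
        miinfo->multiInsertBuffers =
            list_delete_first(miinfo->multiInsertBuffers);
        miinfo->multiInsertBuffers =
            lappend(miinfo->multiInsertBuffers, buffer);

        /*
         * With MAX_PARTITION_BUFFERS < 2 the list can hold just this
         * one buffer, so linitial() returns the same curr_rri buffer
         * again and it gets cleaned up below while still in use.
         */
        buffer = (CopyMultiInsertBuffer *) linitial(miinfo->multiInsertBuffers);
    }

    CopyMultiInsertBufferCleanup(miinfo, buffer);
    miinfo->multiInsertBuffers =
        list_delete_first(miinfo->multiInsertBuffers);
}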
David