From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Dilip Kumar <dilipbalaut(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Segmentation fault when max_parallel degree is very High
Date: 2016-05-06 17:59:06
Message-ID: CA+TgmoYKGcf5k1Xt-Ev7mCCP+4Cnmchn4FFSYaykKV+6RvFxnA@mail.gmail.com
Lists: pgsql-hackers
On Wed, May 4, 2016 at 11:01 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Dilip Kumar <dilipbalaut(at)gmail(dot)com> writes:
>> When parallel degree is set to very high say 70000, there is a segmentation
>> fault in parallel code,
>> and that is because type casting is missing in the code..
>
> I'd say the cause is not having a sane range limit on the GUC.
>
>> or corrupt some memory. Need to typecast
>> i * PARALLEL_TUPLE_QUEUE_SIZE --> (Size) i * PARALLEL_TUPLE_QUEUE_SIZE and
>> this will fix
>
> That might "fix" it on 64-bit machines, but not 32-bit.
Yeah, I think what we should do here is use mul_size(), which will
error out instead of crashing.
Putting a range limit on the GUC is a good idea, too, but I like
having overflow checks built into these code paths as a backstop, in
case a value that we think is a safe upper limit turns out to be less
safe than we think ... especially on 32-bit platforms.
I'll go do that, and also limit the maximum parallel degree to 1024,
which ought to be enough for anyone (see what I did there?).
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company