Re: postgresql v11.1 Segmentation fault: signal 11: by running SELECT... JIT Issue?

From: pabloa98 <pabloa98(at)gmail(dot)com>
To: Andrew Gierth <andrew(at)tao11(dot)riddles(dot)org(dot)uk>
Cc: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: postgresql v11.1 Segmentation fault: signal 11: by running SELECT... JIT Issue?
Date: 2019-01-29 08:38:38
Message-ID: CAEjudX5LiF2XbTuhQ7KQK1ODSNcKz5ezOsT9jWyXZ2knN=wpuw@mail.gmail.com
Lists: pgsql-general

I found this article:

https://manual.limesurvey.org/Instructions_for_increasing_the_maximum_number_of_columns_in_PostgreSQL_on_Linux

It seems I should modify uint8 t_hoff; and replace it with something like
uint32 t_hoff; or uint64 t_hoff;
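
For context, this is roughly the struct where that field lives (an excerpt of
src/include/access/htup_details.h from my reading of the source; check the
exact header in your tree). Note that widening t_hoff changes the on-disk
tuple format, so existing clusters would not be binary compatible:

    struct HeapTupleHeaderData
    {
        union
        {
            HeapTupleFields  t_heap;
            DatumTupleFields t_datum;
        } t_choice;

        ItemPointerData t_ctid;      /* current TID of this or newer tuple */

        uint16 t_infomask2;          /* number of attributes, plus flags */
        uint16 t_infomask;           /* various flag bits */

        uint8  t_hoff;               /* header size incl. null bitmap and
                                      * padding; this is the field to widen */

        /* ^ - 23 bytes - ^ */

        bits8  t_bits[FLEXIBLE_ARRAY_MEMBER]; /* null bitmap, variable length */
    };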

And perhaps I should modify this too?

The fix is easy enough, just adding a

    v_hoff = LLVMBuildZExt(b, v_hoff, LLVMInt32Type(), "");

fixes the issue for me.

If that is the case, I am not sure what kind of modification we should make.
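
If I understand it, that line would go right after the place where the JIT
tuple-deforming code loads t_hoff, somewhere in
src/backend/jit/llvm/llvmjit_deform.c. A sketch of how I read it (paraphrased
from the v11 source, so the surrounding helper names may not be exact):

    LLVMValueRef v_hoff;

    /* t_hoff is a uint8 field in the header, so LLVM loads it as an i8 */
    v_hoff = l_load_struct_gep(b, v_tuplep,
                               FIELDNO_HEAPTUPLEHEADERDATA_HOFF,
                               "t_hoff");

    /* the quoted fix: zero-extend to i32 before it enters 32-bit offset
     * arithmetic, so the i8 value is not mixed with wider integers */
    v_hoff = LLVMBuildZExt(b, v_hoff, LLVMInt32Type(), "");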

I feel I need to explain why we create these huge tables. Basically, we want
to process big matrices for machine learning.
Using tables with classic columns lets us write very clear code. If we had to
start using arrays as columns, things would become complicated and
unintuitive (besides, some columns store vectors as arrays... ).

We could use JSONB (we do, but for JSON documents). The problem is that
storing large numbers of JSONB columns creates performance issues compared
with normal tables.

Since almost everybody is applying ML to different products, perhaps other
companies are also interested in a version of Postgres that can deal with
tables with thousands of columns?
I did not find any ready-to-use Postgres package like that, though.

Pablo

On Tue, Jan 29, 2019 at 12:11 AM pabloa98 <pabloa98(at)gmail(dot)com> wrote:

> I did not modify it.
>
> I guess I should make it bigger than 1765. Is 2400 or 3200 fine?
>
> My apologies if my questions look silly. I do not know about the internal
> format of the database.
>
> Pablo
>
> On Mon, Jan 28, 2019 at 11:58 PM Andrew Gierth <
> andrew(at)tao11(dot)riddles(dot)org(dot)uk> wrote:
>
>> >>>>> "pabloa98" == pabloa98 <pabloa98(at)gmail(dot)com> writes:
>>
>> pabloa98> the table baseline_denull has 1765 columns,
>>
>> Uhh...
>>
>> #define MaxHeapAttributeNumber 1600 /* 8 * 200 */
>>
>> Did you modify that?
>>
>> (The back of my envelope says that on 64bit, the largest usable t_hoff
>> would be 248, of which 23 is fixed overhead leaving 225 as the max null
>> bitmap size, giving a hard limit of 1800 for MaxTupleAttributeNumber and
>> 1799 for MaxHeapAttributeNumber. And the concerns expressed in the
>> comments above those #defines would obviously apply.)
>>
>> --
>> Andrew (irc:RhodiumToad)
>>
>
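
For anyone following along, Andrew's back-of-the-envelope numbers can be
reproduced like this (a minimal sketch, assuming a 64-bit build where
MAXALIGN is 8 and the fixed header overhead is the 23 bytes he mentions):

    #include <stdio.h>

    int main(void)
    {
        int maxalign   = 8;                           /* MAXALIGN on 64-bit */
        int hoff_max   = (255 / maxalign) * maxalign; /* largest MAXALIGN'd
                                                       * value a uint8 t_hoff
                                                       * can hold: 248 */
        int fixed_ovh  = 23;                          /* fixed header bytes */
        int bitmap_max = hoff_max - fixed_ovh;        /* null bitmap: 225 bytes */

        printf("max t_hoff         = %d\n", hoff_max);         /* 248 */
        printf("max null bitmap    = %d bytes\n", bitmap_max); /* 225 */
        printf("attribute hard cap = %d\n", bitmap_max * 8);   /* 1800, one bit
                                                                 * per column; per
                                                                 * Andrew's figures,
                                                                 * MaxHeapAttributeNumber
                                                                 * comes out at 1799 */
        return 0;
    }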
