From: | Michel Pelletier <pelletier(dot)michel(at)gmail(dot)com> |
---|---|
To: | pabloa98 <pabloa98(at)gmail(dot)com> |
Cc: | "pgsql-generallists(dot)postgresql(dot)org" <pgsql-general(at)lists(dot)postgresql(dot)org> |
Subject: | Re: how to add more than 1600 columns in a table? |
Date: | 2019-04-25 02:26:30 |
Message-ID: | CACxu=vLmdefsLjTJFmz-EongnSdK1GYTCG5XkoYo6X+ELHe-2Q@mail.gmail.com |
Lists: | pgsql-general |
On Wed, Apr 24, 2019 at 3:11 PM pabloa98 <pabloa98(at)gmail(dot)com> wrote:
> We used tables because we have 2 types of queries on this table:
>
> SELECT * FROM table_with_lots_of_columns WHERE condition involving a lot of
> columns.
> These types of queries read a lot of rows.
>
> or
>
> SELECT columnX FROM table_with_lots_of_columns WHERE condition involving a
> lot of columns
> These types of queries read very few rows.
>
>
Everyone else has already given great advice on this; I'd just add that arrays
of any dimension are limited to 1GB, like all varlena objects.
You should check out pg-strom: it's highly optimized for running exactly
these kinds of queries on a GPU, and it comes with a native matrix type that
can exceed the 1GB limit.
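To put that 1GB varlena ceiling in perspective, here is a small back-of-the-envelope sketch (mine, not from the thread): it estimates how many float8 elements fit in a single Postgres array value, assuming a roughly 24-byte array header, which is an approximation rather than an exact catalog figure.

```python
# Hedged sketch: rough arithmetic for the varlena 1 GB limit on a
# single-value float8 array. The 24-byte header is an assumption.

VARLENA_LIMIT = 1 << 30   # 1 GiB, the maximum size of one varlena datum
FLOAT8_SIZE = 8           # bytes per double-precision element
ARRAY_HEADER = 24         # approximate 1-D array header overhead (assumption)

max_elems = (VARLENA_LIMIT - ARRAY_HEADER) // FLOAT8_SIZE
print(max_elems)          # on the order of 134 million float8 elements
```

So a single array column tops out around 134 million doubles regardless of how many dimensions it has, which is why a type that can exceed the limit matters for large matrices.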
http://heterodb.github.io/pg-strom/
-Michel