From: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
To: Björn Wittich <Bjoern_Wittich(at)gmx(dot)de>
Cc: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: query a table with lots of coulmns
Date: 2014-09-19 13:32:00
Message-ID: CAFj8pRDufkFjWkdHUWOfzHVJ71E0iVHuRdLuiL0K3j9fgHjpSA@mail.gmail.com
Lists: pgsql-performance
2014-09-19 13:51 GMT+02:00 Björn Wittich <Bjoern_Wittich(at)gmx(dot)de>:
> Hi mailing list,
>
> I am relatively new to Postgres. I have a table with 500 columns and about
> 40 million rows. I call this a cache table: one column is a unique key
> (indexed) and the other 499 columns (type integer) hold values belonging to
> this key.
>
> Now I have a second (temporary) table (only 2 columns, one of which is the
> key of my cache table) and I want to do an inner join between my temporary
> table and the large cache table and export all matching rows. I found out
> that performance increases when I split the join into lots of small parts.
> But it seems that the database needs a lot of disk I/O to gather all 499
> data columns.
> Is there a possibility to tell the database that all these columns should
> always be treated as a tuple and that I always want to get the whole row?
> Perhaps the disk organization could then be optimized?
>
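
(For reference, a minimal sketch of the setup and batched join described above; the names cache, lookup, key, col1 ... col499 and the batch bounds are purely illustrative, not taken from the original post.)

-- Hypothetical cache table: one indexed key plus 499 integer value columns.
CREATE TABLE cache (
    key  text PRIMARY KEY,  -- unique, indexed key
    col1 integer,
    col2 integer
    -- ... continues up to col499
);

-- Temporary table holding the keys to look up.
CREATE TEMP TABLE lookup (
    key text
);

-- Plain inner join returning the whole matching cache rows.
SELECT c.*
FROM lookup l
JOIN cache c ON c.key = l.key;

-- "Lots of small parts": restrict each run to a slice of the keys,
-- so each join touches only part of the cache table at a time.
SELECT c.*
FROM lookup l
JOIN cache c ON c.key = l.key
WHERE l.key >= 'batch_start' AND l.key < 'batch_end';
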
Sorry for the off-topic suggestion, but array databases may be better suited for your purpose:
http://rasdaman.com/
http://www.scidb.org/
>
>
> Thank you for your feedback and ideas
> Best
> Neo
>
>