From: | Björn Wittich <Bjoern_Wittich(at)gmx(dot)de>
---|---
To: | pgsql-performance(at)postgresql(dot)org
Subject: | query a table with lots of columns
Date: | 2014-09-19 11:51:33
Message-ID: | 541C18C5.3080204@gmx.de
Lists: | pgsql-performance |
Hi mailing list,
I am relatively new to Postgres. I have a table with 500 columns and
about 40 million rows. I call this the cache table: one column is a unique
key (indexed) and the other 499 columns (type integer) hold values
belonging to that key.
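For concreteness, here is roughly the layout I mean (a minimal sketch; all names are illustrative, not my real schema):

```sql
-- Sketch of the cache table (illustrative names; the real table has 499 value columns):
CREATE TABLE cache_table (
    key    text PRIMARY KEY,  -- the unique, indexed key
    val1   integer,
    val2   integer,
    -- ... 496 more integer columns ...
    val499 integer
);
```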
Now I have a second (temporary) table with only 2 columns, one of which
is the key of my cache table. I want to do an inner join between the
temporary table and the large cache table and export all matching rows.
I found that performance improves when I split the join into many small
parts, but it seems that the database needs a lot of disk I/O to gather
all 499 data columns.
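Roughly what I am doing now (again a sketch; the names and the batching column are illustrative, not my real code):

```sql
-- Temporary table holding the keys I want to fetch; I assume here that the
-- second column is used to split the work into batches:
CREATE TEMP TABLE key_list (
    key      text,
    batch_no integer
);

-- One small part of the join, repeated for each batch_no:
SELECT c.*
FROM key_list k
JOIN cache_table c ON c.key = k.key
WHERE k.batch_no = 1;
```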
Is there a way to tell the database that all these columns should always
be treated as one tuple and that I always want the whole row? Perhaps the
on-disk organization could then be optimized.
Thank you for your feedback and ideas.
Best
Neo
 | From | Date | Subject
---|---|---|---
Next Message | Szymon Guz | 2014-09-19 12:04:30 | Re: query a table with lots of columns
Previous Message | Mark Kirkwood | 2014-09-19 07:53:27 | Re: postgres 9.3 vs. 9.4