From: Justin Pryzby <pryzby(at)telsasoft(dot)com>
To: Lev Kokotov <lev(dot)kokotov(at)gmail(dot)com>
Cc: pgsql-performance(at)lists(dot)postgresql(dot)org
Subject: Re: Use Postgres as a column store by creating one table per column
Date: 2019-05-22 04:43:09
Message-ID: 20190522044309.GI4426@telsasoft.com
Lists: pgsql-performance
On Tue, May 21, 2019 at 09:28:07PM -0700, Lev Kokotov wrote:
> Is it efficient to use Postgres as a column store by creating one table per
> column?
>
> I would query it with something like `[...] UNION SELECT value AS <table>
> FROM <table> WHERE value = <value> UNION [...]` to build a row.
I think you mean JOIN not UNION.
It'd be awful (at one point I tried it very briefly). If you stored the
equivalent of two 10-column tables this way, that'd be 20 single-column tables
and 19 joins to reassemble a row. I imagine each table would be "serial id
unique, float value" or similar, so the execution might not be terrible, as
it'd be using an index lookup for each column. But the planner would suffer,
badly. Don't even try to read the EXPLAIN output.
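For illustration, a sketch of what that layout and query would look like (the
table and column names here are hypothetical, not from the original thread):

```sql
-- One narrow table per logical column, keyed by a shared row id.
CREATE TABLE col_price (id serial PRIMARY KEY, value float8);
CREATE TABLE col_qty   (id serial PRIMARY KEY, value float8);
-- ...and so on, one table per remaining column.

-- Reassembling a single logical row takes a JOIN per extra column:
SELECT p.id,
       p.value AS price,
       q.value AS qty
       -- , more columns via more joins...
FROM col_price p
JOIN col_qty q USING (id)
-- JOIN col_whatever w USING (id)  -- repeated for every remaining column
WHERE p.value = 42;
```

With 20 such tables the FROM clause carries 19 of those JOINs, which is what
blows up the planner's search space.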
Actually, the execution would also be hitting at least 2x files per "column"
(one for the index and one for the table data), so that's not great.
Also, the per-row overhead of a 2-column table is high (a ~23-byte tuple header
plus alignment padding for perhaps 12 bytes of data), so your DB would be much
bigger. Sorry to reference a secondary source, but:
https://stackoverflow.com/questions/13570613/making-sense-of-postgres-row-sizes
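You can measure that overhead directly. A sketch, using a hypothetical narrow
table (the name and row count are made up for the example):

```sql
-- Narrow table: roughly 12 bytes of data per row (4-byte int + 8-byte float),
-- but each heap tuple also carries a 23-byte header, alignment padding, and a
-- 4-byte line pointer in the page.
CREATE TABLE col_value (id serial, value float8);
INSERT INTO col_value (value)
SELECT random() FROM generate_series(1, 100000);

-- Heap size divided by row count; expect ~40+ bytes/row, far above 12.
SELECT pg_relation_size('col_value') / 100000.0 AS bytes_per_row;
```

Multiply that overhead by one table per column and the bloat adds up quickly
compared to a single wide table, which pays the tuple header once per row.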
> I'm thinking since Postgres stores tables in continuous blocks of 16MB each
> (I think that's the default page size?) I would get efficient reads and
> with parallel queries I could benefit from multiple cores.
The default page size is 8kB, not 16MB.
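You can confirm the compiled-in page size from psql:

```sql
SHOW block_size;  -- 8192 on a default build; BLCKSZ is fixed at compile time
```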
Justin