From: Stefan Keller <sfkeller(at)gmail(dot)com>
To: Jonathan Vanasco <postgres(at)2xlp(dot)com>
Cc: PostgreSQL mailing lists <pgsql-general(at)postgresql(dot)org>
Subject: Re: splitting up tables based on read/write frequency of columns
Date: 2015-01-19 22:07:57
Message-ID: CAFcOn29KVFGjp7tJ746i8t7oLuT7px2BHfVkMiRsZrCFVKpmnA@mail.gmail.com
Lists: pgsql-general
Hi
I'm pretty sure PostgreSQL can handle this.
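Your recollection about UPDATE is essentially right, by the way: every
UPDATE writes a new row version and marks the old one dead for VACUUM
(unchanged TOASTed values are shared between versions rather than
rewritten). You can watch it happen through the ctid and xmin system
columns; a quick illustration with a throwaway table:

-- ctid (physical location) and xmin (creating transaction id) both
-- change on UPDATE, showing that a new row version was written.
CREATE TABLE mvcc_demo (id int PRIMARY KEY, v int);
INSERT INTO mvcc_demo VALUES (1, 0);
SELECT ctid, xmin, v FROM mvcc_demo;  -- e.g. ctid (0,1)
UPDATE mvcc_demo SET v = v + 1;
SELECT ctid, xmin, v FROM mvcc_demo;  -- e.g. ctid (0,2): a new version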
But since you're asking from a theoretical angle, it's probably
worthwhile to look at column stores like cstore_fdw [1].
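Per its README, getting started looks roughly like this (the foreign
table definition here is purely illustrative):

-- cstore_fdw must be built, installed, and added to
-- shared_preload_libraries before these statements work.
CREATE EXTENSION cstore_fdw;
CREATE SERVER cstore_server FOREIGN DATA WRAPPER cstore_fdw;

CREATE FOREIGN TABLE measurements (
    taken_at timestamptz,
    sensor   int,
    value    double precision
)
SERVER cstore_server
OPTIONS (compression 'pglz');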
-S.
[1] http://citusdata.github.io/cstore_fdw/
2015-01-19 22:47 GMT+01:00 Jonathan Vanasco <postgres(at)2xlp(dot)com>:
> This is really a theoretical/anecdotal question, as I'm not yet at a scale where this would be measurable. I want to investigate while it's fresh in my mind...
>
> I recall reading that unless a row has columns that are TOASTed, an `UPDATE` is essentially an `INSERT + DELETE`, with the previous row marked for vacuuming.
>
> A few of my tables have the following characteristics:
> - The Primary Key has many other tables/columns that FKEY onto it.
> - Many columns (30+) of small data size
> - Most columns (90%) see about 1 WRITE (UPDATE) per 1000 READS
> - Some columns (10%) handle a bit of internal bookkeeping and see about 1 WRITE (UPDATE) per 50 READS
>
> Has anyone done testing/benchmarking on potential efficiency/savings by consolidating the frequent UPDATE columns into their own table?
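For reference, the split being described would come out roughly like
this; a sketch only, with invented table and column names:

-- The ~10% frequently-updated bookkeeping columns move to a narrow
-- side table sharing the primary key, so each hot UPDATE rewrites a
-- small row instead of the wide 30+-column one.
CREATE TABLE item (
    id          serial PRIMARY KEY,
    title       text,
    description text
    -- ... the other rarely-updated columns ...
);

CREATE TABLE item_stats (
    item_id     int PRIMARY KEY REFERENCES item (id),
    view_count  bigint NOT NULL DEFAULT 0,
    last_seen   timestamptz
);

-- The hot-path write now touches only the narrow row:
UPDATE item_stats
   SET view_count = view_count + 1, last_seen = now()
 WHERE item_id = 1;

-- Reads that need both halves join on the shared key:
SELECT i.*, s.view_count
  FROM item i
  JOIN item_stats s ON s.item_id = i.id
 WHERE i.id = 1;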