From: Alfred Perlstein <alfred(at)freebsd(dot)org>
To: "Joshua D. Drake" <jd(at)commandprompt(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Why we lost Uber as a user
Date: 2016-08-02 07:21:38
Message-ID: 7c6c87fb-2220-423b-b29b-04a99db54b2c@freebsd.org
Lists: pgsql-hackers
On 7/26/16 9:54 AM, Joshua D. Drake wrote:
> Hello,
>
> The following article is a very good look at some of our limitations
> and highlights some of the pains many of us have been working "around"
> since we started using the software.
>
> https://eng.uber.com/mysql-migration/
>
> Specifically:
>
> * Inefficient architecture for writes
> * Inefficient data replication
> * Issues with table corruption
> * Poor replica MVCC support
> * Difficulty upgrading to newer releases
>
> It is a very good read and I encourage our hackers to do so with an
> open mind.
>
> Sincerely,
>
> JD
>
It was a good read.

Having based a high-performance web tracking service, as well as a high-
performance security appliance, on PostgreSQL, I too have been bitten by
these issues.
I had a few questions that maybe the folks with core knowledge can answer:
1) Would it be possible to create a "star-like" schema to fix this
problem? For example, let's say you have a table similar to Uber's:
col0pk, col1, col2, col3, col4, col5
All columns are indexed, and assume that updates happen to only one
column at a time.
Why not figure out some way to encourage or automate the splitting of
this table into multiple tables that present themselves as a single table?
What I mean is that you would then wind up with the following tables:
table1: col0pk, col1
table2: col0pk, col2
table3: col0pk, col3
table4: col0pk, col4
table5: col0pk, col5
Now when you update "col5" on a row, you only have to update the indexes
on table5 (col5 and col0pk), as opposed to the original layout, where
you would have to update many more indices. In addition, I believe
vacuum overhead would be somewhat mitigated in this case as well.
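To make the idea above concrete, here is a rough sketch (using Python and SQLite purely for illustration, not PostgreSQL internals): each column lives in its own narrow table keyed by the shared primary key, and a view presents them as the original wide table. The table and column names (table1..table5, col0pk, wide) follow the hypothetical schema above.

```python
import sqlite3

# Illustrative only: one narrow table per indexed column, unified
# behind a view so readers still see a single logical table.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Each split table carries only the shared primary key plus one column.
for i in range(1, 6):
    cur.execute(f"CREATE TABLE table{i} (col0pk INTEGER PRIMARY KEY, col{i})")
    cur.execute(f"CREATE INDEX idx_col{i} ON table{i} (col{i})")

# A view joins the pieces back into the original wide shape.
cur.execute("""
    CREATE VIEW wide AS
    SELECT t1.col0pk, t1.col1, t2.col2, t3.col3, t4.col4, t5.col5
    FROM table1 t1
    JOIN table2 t2 USING (col0pk)
    JOIN table3 t3 USING (col0pk)
    JOIN table4 t4 USING (col0pk)
    JOIN table5 t5 USING (col0pk)
""")

# Insert one logical row, spread across the five tables.
for i, v in {1: "a", 2: "b", 3: "c", 4: "d", 5: "e"}.items():
    cur.execute(f"INSERT INTO table{i} VALUES (1, ?)", (v,))

# Updating col5 touches only table5 and its two small indexes;
# table1..table4 and their indexes are untouched.
cur.execute("UPDATE table5 SET col5 = 'z' WHERE col0pk = 1")

print(cur.execute("SELECT * FROM wide").fetchone())
# → (1, 'a', 'b', 'c', 'd', 'z')
```

The trade-off, of course, is that reads touching many columns now pay for the joins, which is why the splitting would need to be guided by the actual update pattern.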
2) Why not have a look at how InnoDB does its storage? Would it be
possible to do the same?
3) For the small-ish table that Uber mentioned, is there a way to keep
it in memory while providing some level of sync to disk so that it
stays consistent?
thanks!
-Alfred