Re: Uber migrated from Postgres to MySQL

From: James Keener <jim(at)jimkeener(dot)com>
To: John R Pierce <pierce(at)hogranch(dot)com>
Cc: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: Uber migrated from Postgres to MySQL
Date: 2016-07-28 05:08:31
Message-ID: CAG8g3twjTudcdNkynR5SOT3CYwimnSxX0MjVQmHiYJ6LCfQzuQ@mail.gmail.com
Lists: pgsql-general

So, millions is a lot, but it's not difficult to get to a place where
you have thousands of tables.

Imagine a case involving census data and the associated geometries.
https://github.com/censusreporter/census-postgres has 22 surveys, each
with 230+ tables. That's 5,000+ tables right there. On top of that, the
TIGER tables for all of that are another 50 tables per year, so another
350 tables.

If these were partitioned by state, instead of keeping all records for
all states in a single table, then we're looking at roughly 5,350 tables
x ~50 states, or about 270,000 tables.
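
To make the per-state split concrete, here's a minimal sketch of the kind
of inheritance-based partitioning I mean (the table and column names are
made up for illustration, not the actual census-postgres schema):

    CREATE TABLE b01001 (geoid text, state char(2), estimate numeric, moe numeric);

    -- one child table per state, so ~50x the table count for each parent
    CREATE TABLE b01001_al (CHECK (state = 'AL')) INHERITS (b01001);
    CREATE TABLE b01001_ak (CHECK (state = 'AK')) INHERITS (b01001);
    -- ...and so on for the remaining states and territories.

Multiply that by every table in every survey year and the count adds up fast.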

Jim

On Thu, Jul 28, 2016 at 12:48 AM, John R Pierce <pierce(at)hogranch(dot)com> wrote:
> On 7/27/2016 9:39 PM, Jeff Janes wrote:
>>
>> That depends on how how many objects there are consuming that 1 TB.
>> With millions of small objects, you will have problems. Not as many
>> in 9.5 as there were in 9.1, but still it does not scale linearly in
>> the number of objects. If you only have thousands of objects, then as
>> far as I know -k works like a charm.
>
>
> millions of tables? that's akin to having millions of classes in an object
> oriented program, seems a bit excessive.
>
>
>
> --
> john r pierce, recycling bits in santa cruz
