Re: Millions of tables

From: Stuart Bishop <stuart(at)stuartbishop(dot)net>
To: Greg Spiegelberg <gspiegelberg(at)gmail(dot)com>
Cc: "pgsql-performa(dot)" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Millions of tables
Date: 2016-09-26 14:21:22
Message-ID: CADmi=6NOA+YbzBTuFmmuQ7Kw61YkAPpYS14w9McjpPivwz-p7g@mail.gmail.com
Lists: pgsql-performance

On 26 September 2016 at 20:51, Greg Spiegelberg <gspiegelberg(at)gmail(dot)com>
wrote:

>
> An alternative if you exhaust or don't trust other options: use a foreign
>> data wrapper to access your own custom storage. A single table at the PG
>> level, but you can shard the data yourself into 8 bazillion separate stores,
>> in whatever structure suits your read and write operations (maybe reusing an
>> embedded db engine, ordered flat file+log+index, whatever).
>>
>>
> However even 8 bazillion FDWs may cause an "overflow" of relationships,
> at the loss of having an efficient storage engine acting more like a
> traffic cop. In such a case, I would opt to put such logic in the app to
> directly access the true storage rather than using FDWs.
>

I mean one FDW table, which shards internally to 8 bazillion stores on
disk. It has the sharding key, so it can calculate exactly which store(s)
need to be hit and return the rows; to PostgreSQL it looks like one big
table with 1.3 trillion rows. And if it doesn't do that in 30ms, you get to
blame yourself :)
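The routing idea is easy to sketch. Below is a hypothetical illustration (not
the PostgreSQL FDW C API, and all names are invented): one "table" object
hashes the sharding key to pick exactly one of N backing stores, so a keyed
lookup never has to touch the other N-1 stores.

```python
# Hedged sketch: hash-based shard routing behind a single table-like
# facade. Plain dicts stand in for the real embedded DBs or
# flat file+log+index stores; NUM_STORES stands in for "8 bazillion".
import hashlib

NUM_STORES = 8

def store_for_key(shard_key: str) -> int:
    """Map a sharding key deterministically to one store index."""
    digest = hashlib.sha256(shard_key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % NUM_STORES

class ShardedTable:
    """Looks like one table; data actually lives in NUM_STORES stores."""
    def __init__(self):
        self.stores = [dict() for _ in range(NUM_STORES)]

    def insert(self, key: str, row: dict) -> None:
        self.stores[store_for_key(key)][key] = row

    def lookup(self, key: str):
        # With the sharding key in hand, exactly one store is hit --
        # which is what keeps a keyed read fast at any store count.
        return self.stores[store_for_key(key)].get(key)

t = ShardedTable()
t.insert("sensor-42", {"reading": 3.14})
print(t.lookup("sensor-42"))  # {'reading': 3.14}
```

A real FDW would do the same key-to-store calculation in its scan path and
stream the matching rows back to the executor, so the planner still sees a
single foreign table.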

--
Stuart Bishop <stuart(at)stuartbishop(dot)net>
http://www.stuartbishop.net/

Browse pgsql-performance by date

Next message:     Greg Spiegelberg  2016-09-26 14:24:30  Re: Millions of tables
Previous message: Greg Spiegelberg  2016-09-26 14:09:04  Re: Millions of tables