| From: | Greg Spiegelberg <gspiegelberg(at)gmail(dot)com> |
|---|---|
| To: | Yves Dorfsman <yves(at)zioup(dot)com> |
| Cc: | pgsql-performance <pgsql-performance(at)postgresql(dot)org> |
| Subject: | Re: Millions of tables |
| Date: | 2016-09-26 13:53:16 |
| Message-ID: | CAEtnbpWCSM81Sf2_DFQ6Xio9FfrkWuacELoRiYuLnJr2radDFw@mail.gmail.com |
| Lists: | pgsql-performance |
Consider the problem, though. Random access to trillions of records, with no
guarantee any record will be fetched twice within a short time frame, nullifies
the effectiveness of a cache unless the cache is enormous. If I had a cache
that big, hundreds of TB, I wouldn't be looking at on-disk storage
options. :)
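The point above can be sketched with back-of-envelope arithmetic (my own illustration, not from the thread, and the 1 TB / 500 TB figures are hypothetical): under uniform random access, the steady-state hit rate of a cache is roughly its size divided by the working-set size.

```python
def expected_hit_rate(cache_bytes: int, working_set_bytes: int) -> float:
    """Approximate steady-state hit rate for a cache under uniformly
    random access: the probability the requested item is resident is
    roughly cache_size / working_set (capped at 1.0)."""
    return min(1.0, cache_bytes / working_set_bytes)

TB = 1 << 40  # one tebibyte in bytes

# Hypothetical numbers for illustration: 1 TB of memcache in front of
# 500 TB of uniformly-accessed hot data.
rate = expected_hit_rate(1 * TB, 500 * TB)
print(f"hit rate ~ {rate:.3%}")  # ~0.2%: nearly every lookup still goes to disk
```

This is why a memcache tier only helps when access is skewed toward a small hot set; with uniform access over trillions of rows, the cache would have to approach the size of the data itself.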
-Greg
On Mon, Sep 26, 2016 at 6:54 AM, Yves Dorfsman <yves(at)zioup(dot)com> wrote:
> Something that is not talked about at all in this thread is caching. A bunch
> of memcache servers in front of the DB should be able to help with the 30ms
> constraint (doesn't have to be memcache, some caching technology).
>
> --
> http://yves.zioup.com
> gpg: 4096R/32B0F416
>
| From | Date | Subject | |
|---|---|---|---|
| Next Message | Greg Spiegelberg | 2016-09-26 13:57:11 | Re: Millions of tables |
| Previous Message | Greg Spiegelberg | 2016-09-26 13:51:10 | Re: Millions of tables |