From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: dandl <david(at)andl(dot)org>
Cc: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: What limits Postgres performance when the whole database lives in cache?
Date: 2016-09-02 17:10:35
Message-ID: CAOR=d=11Sy1iqHU3Lknc_dX2SB1CCqZJy3_UbLgZ5cpGOF7OEA@mail.gmail.com
Lists: pgsql-general
On Fri, Sep 2, 2016 at 4:49 AM, dandl <david(at)andl(dot)org> wrote:
> Re this talk given by Michael Stonebraker:
>
> http://slideshot.epfl.ch/play/suri_stonebraker
>
> He makes the claim that in a modern ‘big iron’ RDBMS such as Oracle, DB2, MS
> SQL Server, Postgres, given enough memory that the entire database lives in
> cache, the server will spend 96% of its memory cycles on unproductive
> overhead. This includes buffer management, locking, latching (thread/CPU
> conflicts) and recovery (including log file reads and writes).
>
> [Enough memory in this case assumes that for just about any business, 1TB is
> enough. The thrust of his argument is that a server designed specifically for
> that environment would run 25x faster.]
>
> I wondered if there are any figures or measurements on Postgres performance
> in this ‘enough memory’ environment to support or contest this point of
> view?
What limits PostgreSQL when everything fits in memory? The fact that
it's designed to survive a power outage without losing your data.
Stonebraker's new stuff is cool, but it is NOT designed to survive
total power failure.
Two totally different design concepts. It's apples and oranges to compare them.
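
You can put a rough number on that trade-off yourself by benchmarking
PostgreSQL with its durability machinery switched off. A minimal sketch
using pgbench; the database name "bench" and the scale/client counts are
arbitrary choices, and fsync = off leaves the cluster unrecoverable
after a crash, so run this only on a throwaway instance:

    # postgresql.conf -- trade crash safety for speed (throwaway clusters only)
    fsync = off                 # never force WAL to physical storage
    synchronous_commit = off    # COMMIT returns before the WAL flush
    full_page_writes = off      # skip full-page images after checkpoints
    wal_level = minimal         # log only what local crash recovery needs
    max_wal_senders = 0         # required when wal_level = minimal

    # shell -- run the same workload with and without the settings above
    createdb bench
    pgbench -i -s 100 bench           # initialize at scale 100 (10M account rows)
    pgbench -c 8 -j 8 -T 60 bench     # 8 clients, 60 s; compare the reported tps

Comparing the tps from a stock configuration against the relaxed one
shows how much of the cost is the durability overhead Stonebraker is
counting; buffer management, locking, and latching are still in place
either way.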