| From: | "dandl" <david(at)andl(dot)org> | 
|---|---|
| Cc: | "'pgsql-general'" <pgsql-general(at)postgresql(dot)org> | 
| Subject: | What limits Postgres performance when the whole database lives in cache? | 
| Date: | 2016-09-02 10:49:12 | 
| Message-ID: | 000e01d20507$a5bafa10$f130ee30$@andl.org | 
| Lists: | pgsql-general | 
Regarding this talk given by Michael Stonebraker:
http://slideshot.epfl.ch/play/suri_stonebraker
He makes the claim that in a modern ‘big iron’ RDBMS such as Oracle, DB2, MS SQL Server or Postgres, given enough memory that the entire database lives in cache, the server will spend about 96% of its CPU cycles on unproductive overhead. This includes buffer management, locking, latching (thread/CPU contention) and recovery (including log file reads and writes).
[‘Enough memory’ in this case assumes that 1TB is sufficient for just about any business. The thrust of his argument is that a server designed specifically for the fully in-memory case would run 25x faster.]
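As an aside, on 9.6 (which adds wait events) you can get a rough picture of where those cycles go on your own workload by sampling pg_stat_activity while a benchmark runs. This is only a sketch, and the mapping of wait-event types onto Stonebraker's categories is approximate:

```bash
# Sketch: sample what backends are waiting on while a workload is running.
# Assumes PostgreSQL 9.6+, which exposes wait_event_type/wait_event in
# pg_stat_activity. Roughly: 'LWLock*' waits correspond to latching,
# 'Lock' to heavyweight locking, and a NULL wait event means the backend
# is actually executing on the CPU. Repeat the sample to get a trend.
psql -d postgres -c "
  SELECT wait_event_type, wait_event, count(*)
  FROM pg_stat_activity
  WHERE state = 'active' AND pid <> pg_backend_pid()
  GROUP BY 1, 2
  ORDER BY 3 DESC;"
```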
I wondered whether there are any figures or measurements of Postgres performance in this ‘enough memory’ environment that would support or contest this point of view?
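For what it's worth, a crude way to produce such figures yourself is to run pgbench with a scale factor small enough that the whole dataset fits in shared_buffers, then confirm from pg_stat_database that essentially nothing is being read from disk. This is only a sketch; the scale factor, shared_buffers size and client counts below are assumptions to be tuned for the hardware at hand:

```bash
# Sketch: benchmark Postgres with the whole database resident in cache.
# Assumes a local cluster whose shared_buffers (say 8GB) comfortably holds
# the pgbench dataset; scale factor 100 is roughly 1.5GB of data.
createdb pgbench_mem
pgbench -i -s 100 pgbench_mem          # initialise ~1.5GB of pgbench tables
pgbench -c 16 -j 4 -T 300 pgbench_mem  # 16 clients, 4 threads, 5 minutes

# Confirm the run was served almost entirely from the buffer cache:
psql -d pgbench_mem -c "
  SELECT blks_read, blks_hit,
         round(blks_hit * 100.0 / nullif(blks_hit + blks_read, 0), 2) AS hit_pct
  FROM pg_stat_database
  WHERE datname = 'pgbench_mem';"
```

Re-running the same benchmark with synchronous_commit=off (or fsync=off on a throwaway cluster) gives a rough feel for how much of the remaining time is commit/WAL overhead rather than useful work.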
Regards
David M Bennett FACS
_____
Andl - A New Database Language - andl.org