| From: | Robert Haas <robertmhaas(at)gmail(dot)com> | 
|---|---|
| To: | Craig Ringer <craig(at)2ndquadrant(dot)com> | 
| Cc: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Damir Simunic <damir(dot)simunic(at)wa-research(dot)ch>, Vladimir Sitnikov <sitnikov(dot)vladimir(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org> | 
| Subject: | Re: Proposal: http2 wire format | 
| Date: | 2018-05-10 20:37:03 | 
| Message-ID: | CA+TgmoZ-6=GNXxmDzs=8cgvYM2sKPA4W-go9z5sBMNW2O=qX=w@mail.gmail.com | 
| Lists: | pgsql-hackers | 
On Mon, Mar 26, 2018 at 7:51 PM, Craig Ringer <craig(at)2ndquadrant(dot)com> wrote:
> There's been no visible consideration of overheads and comparison with
> existing v3 protocol. Personally I'm fine with adding some protocol overhead
> in bytes terms; low latency links have the bandwidth not to care much
> compared to payload sizes etc. On high latency links it's all about the
> round trips, not message sizes. But I want to know what those overheads are,
> and why they're there.
I think that the overhead of any new protocol (or protocol version)
ought to be a major consideration.  Overhead includes, but is not
limited to, number of bytes sent over the wire.  It also includes how
fast we can parse that protocol; Andres's earlier comments on this
thread about Parse/Bind/Execute being slower than Query are on point.
If we implement a new protocol, we should measure how many QPS we can
push through it (for both prepared and unprepared queries).
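To make that concrete, here is a minimal, illustrative libpq sketch of the kind of measurement I mean, comparing the simple-query path (PQexec, one Query message per round trip) against a named prepared statement (PQprepare once, then PQexecPrepared per round trip). The connection settings, query text, and iteration count are placeholder assumptions, not anything proposed in this thread:

```c
/*
 * Illustrative QPS comparison: simple-protocol queries via PQexec
 * vs. a named prepared statement via PQprepare/PQexecPrepared.
 * Connection string, query, and iteration count are placeholders.
 * Build with: cc qps.c -lpq
 */
#include <stdio.h>
#include <time.h>
#include <libpq-fe.h>

#define ITERATIONS 100000

static double
elapsed_seconds(struct timespec start, struct timespec end)
{
    return (end.tv_sec - start.tv_sec) +
           (end.tv_nsec - start.tv_nsec) / 1e9;
}

int
main(void)
{
    PGconn     *conn = PQconnectdb("");   /* use environment/defaults */
    struct timespec t0, t1;
    PGresult   *res;
    int         i;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* Unprepared: each iteration sends a single Query message. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0; i < ITERATIONS; i++)
    {
        res = PQexec(conn, "SELECT 1");
        if (PQresultStatus(res) != PGRES_TUPLES_OK)
            fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
        PQclear(res);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("simple query:   %.0f QPS\n",
           ITERATIONS / elapsed_seconds(t0, t1));

    /* Prepared: one Parse up front, then Bind/Execute per iteration. */
    res = PQprepare(conn, "q1", "SELECT 1", 0, NULL);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "prepare failed: %s", PQerrorMessage(conn));
    PQclear(res);

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0; i < ITERATIONS; i++)
    {
        res = PQexecPrepared(conn, "q1", 0, NULL, NULL, NULL, 0);
        if (PQresultStatus(res) != PGRES_TUPLES_OK)
            fprintf(stderr, "execute failed: %s", PQerrorMessage(conn));
        PQclear(res);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("prepared query: %.0f QPS\n",
           ITERATIONS / elapsed_seconds(t0, t1));

    PQfinish(conn);
    return 0;
}
```

For the existing v3 protocol, pgbench -M simple vs. -M prepared already gives roughly this comparison; the point is that we would want comparable numbers for any proposed replacement before committing to it.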
-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company