To HTTP2 or not to HTTP2

First published in Trustpilot's tech blog


Here’s a non-obvious metaphor about moving from one protocol to another… Photo by Martin Förster on Unsplash

If you are a web developer, you have likely heard of HTTP2. You've heard how much faster it is, how it allows you to make as many requests as you want, how you should switch to it as soon as possible. Other things you don't hear so much about. How does it work? Does it make sense to switch from HTTP1.1 in your particular case? What benefits will you get, and how?


If you are building a website with something like AWS CloudFront, and all you want to know is which version to use, the answer is HTTP2. You might not see any significant benefits, but it won't be worse either.

HTTP2 uses the same interface as HTTP1.1. The difference is how data gets sent down the wire. The main benefit is that it allows us to send many requests in the same TCP connection, in parallel, in a binary format. Many other optimizations derive from this.

To understand HTTP2, we must first look at how the Internet and its protocols work.


This is meant to make you think of networks and packets. Photo by Denys Nevozhai on Unsplash

A primer on network protocols

The base of how the Internet works is the Internet Protocol (IP). It defines how a computer sends information to another through a network.

The Transmission Control Protocol (TCP) is implemented on top of it. It gives us the abstraction of a reliable network. Without TCP, your application would have to deal with lost and corrupted packets, a slow router blocking traffic somewhere, and so on.

TCP adds several checks to make sure your data transmission is as accurate as possible, at the cost of speed. A TCP connection starts with a handshake between the client and the server, which requires a full round trip. Once the connection is established, it enters a slow-start phase: the number of packets sent is progressively ramped up as they are successfully delivered. This avoids congestion in a slow network.
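Slow start can be sketched with a toy model: the sender's congestion window starts small and doubles each round trip until everything is delivered. The initial window of 10 segments matches modern defaults (RFC 6928), but real TCP behaviour is far more nuanced than this.

```python
# Toy model of TCP slow start: the congestion window (cwnd) starts
# small and doubles every round trip until all segments are sent.
# This is a simplification -- real TCP also reacts to loss, delay, etc.

def round_trips_to_send(total_segments: int, initial_cwnd: int = 10) -> int:
    """Count the round trips needed to deliver `total_segments`."""
    cwnd, sent, rtts = initial_cwnd, 0, 0
    while sent < total_segments:
        sent += cwnd   # one round trip delivers a full window
        cwnd *= 2      # the window doubles while slow start lasts
        rtts += 1
    return rtts

# A 64 KB response split into 45 segments of ~1460 bytes each:
print(round_trips_to_send(45))  # 3 round trips (10 + 20 + 40 segments)
```

The takeaway: short-lived connections pay this ramp-up cost over and over, which is exactly what HTTP1.0's connection-per-request model forces.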

HTTP, or the Hypertext Transfer Protocol, was implemented over TCP to make it easy to… well, transfer hypertext. This was initially limited to HTML (Hypertext Markup Language), but these days it can be pretty much anything. It abstracts away TCP and its packets, leaving you with the all too familiar GET, POST, PUT, DELETE methods, and so on.

One TCP connection to rule them all

In HTTP1.0, every single HTTP request opens a new TCP connection, and closes it after getting a response. This means that connections rarely get out of the slow-start phase. HTTP1.1 introduces keep-alive, allowing a TCP connection to be reused for several HTTP requests. Keep in mind that you can't send requests in parallel on a single connection — you only send a new request once you get a response to the previous one. One slow request can slow down everything. To make requests in parallel, you need to open multiple TCP connections, and most browsers limit you to 6 connections per origin. This led to some of the most common optimizations in HTTP1.1:

  • Domain sharding is used to spread requests across different subdomains. Since each subdomain counts as its own origin, this sidesteps the 6-connections-per-origin restriction and lets you make more requests in parallel.
  • Code bundling and image spriting allow you to make fewer requests.

No matter what, HTTP1.1 subjects many of your requests to TCP's slow-start phase, and a slow endpoint can block the requests queued behind it.
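An idealized back-of-the-envelope model shows why the connection limit matters. The numbers below (30 assets, 100 ms per request) are hypothetical, and the model ignores bandwidth, slow start, and connection setup, but the shape of the result holds:

```python
# Toy latency model (hypothetical numbers): 30 assets, 100 ms each.
# HTTP1.1 serialises requests per connection, so total time is driven
# by how many connections you can open; HTTP2 multiplexes them all.

ASSETS, REQUEST_MS = 30, 100

sequential  = ASSETS * REQUEST_MS            # one keep-alive connection
six_conns   = -(-ASSETS // 6) * REQUEST_MS   # browser limit: 6 connections
multiplexed = REQUEST_MS                     # all requests in flight at once

print(sequential, six_conns, multiplexed)  # 3000 500 100
```

Even with the full 6 connections, HTTP1.1 still serialises 5 requests per connection in this model, while multiplexing puts them all on the wire at once.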

HTTP2 solves this problem by using a single TCP connection per origin, and allowing many requests and responses to be on the wire simultaneously. This is called multiplexing.

Streams, messages and frames (The binary framing layer)

HTTP2 has a new mechanism called the binary framing layer. It breaks requests and responses into frames: small, binary-encoded units.

Frames travel within a stream: a bidirectional flow of bytes carrying requests and responses. A single connection can carry many streams concurrently, and frames from different streams can be interleaved on the wire.

A sequence of frames forms a message, which is basically a request or a response.
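The framing format itself is simple: per RFC 7540 §4.1, every frame starts with a fixed 9-byte header (a 24-bit payload length, an 8-bit type, 8-bit flags, and a 31-bit stream identifier). Here's a minimal sketch of encoding and decoding that header:

```python
import struct

# Sketch of the HTTP2 frame header (RFC 7540 §4.1): 9 bytes of
# header, then the payload. The stream id tells the receiver which
# interleaved stream this frame belongs to.

def encode_frame(frame_type: int, flags: int, stream_id: int, payload: bytes) -> bytes:
    header = struct.pack(">I", len(payload))[1:]          # 24-bit payload length
    header += struct.pack(">BB", frame_type, flags)       # 8-bit type, 8-bit flags
    header += struct.pack(">I", stream_id & 0x7FFFFFFF)   # 31-bit stream identifier
    return header + payload

def decode_header(frame: bytes):
    length = int.from_bytes(frame[0:3], "big")
    frame_type, flags = frame[3], frame[4]
    stream_id = int.from_bytes(frame[5:9], "big") & 0x7FFFFFFF
    return length, frame_type, flags, stream_id

# A DATA frame (type 0x0) on stream 1 carrying a tiny payload:
frame = encode_frame(0x0, 0x0, 1, b"hello")
print(decode_header(frame))  # (5, 0, 0, 1)
```

Because every frame is self-describing, the receiver can reassemble each message from frames arriving in any interleaved order.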


Containers are like frames, each row is like a stream… ah nevermind, it just looks pretty. Photo by chuttersnap on Unsplash

Streams can have different priorities, so that essential assets are delivered before others. However, lower-priority frames can still be delivered while a higher-priority one is being processed. In HTTP1.1, the connection would sit idle until the high-priority asset was delivered.

Getting more than you asked for with server push

Until now, HTTP imposed a one-to-one relation between requests and responses. With HTTP2, one-to-many relations are possible. If the server knows that the client will need a few other assets besides the one requested, it can send them immediately. This saves a round trip, plus the parsing time the client would need to discover and request each additional asset.

In practice, server push is equivalent to inlining assets in HTML. Yet it reduces complexity in your application or build system, and allows caching and reuse of assets across different pages. It can also be more efficient, since base64-encoded inline images are roughly 30% larger than the original binary.
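That ~30% figure follows directly from how base64 works: every 3 bytes of binary become 4 ASCII characters. A quick check:

```python
import base64
import os

# Base64 encodes every 3 bytes as 4 ASCII characters, so an inlined
# image grows by roughly a third -- the ~30% overhead mentioned above.

image = os.urandom(30_000)        # stand-in for a 30 KB image
inlined = base64.b64encode(image)

overhead = len(inlined) / len(image) - 1
print(f"{overhead:.0%}")  # 33%
```

Pushing the raw bytes as a separate, cacheable response avoids that overhead entirely.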


A lot of trains. Just like all the responses you’ll get. Photo by Campbell Boulanger on Unsplash

Smaller cookies for you with header compression

The first HTTP2 frame sent in every message is the headers frame. HTTP1.1 transmits the headers in plain text. HTTP2 switches to binary using a new compression method called HPACK.

HPACK takes advantage of the fact that we know most possible headers beforehand. It replaces each header with an index; both the client and the server keep lookup tables (a static table of common headers, plus a dynamic table built up during the connection) to rebuild the headers on arrival.
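To make the idea concrete, here are a few real entries from HPACK's static table (RFC 7541, Appendix A). Instead of transmitting `:method: GET` as text, the sender transmits the single index 2:

```python
# A handful of entries from HPACK's static table (RFC 7541, App. A).
# The full table has 61 entries; headers not covered by it go through
# the dynamic table or are sent as literals.

STATIC_TABLE = {
    2: (":method", "GET"),
    3: (":method", "POST"),
    7: (":scheme", "https"),
    8: (":status", "200"),
}

def decode_indexed(index: int) -> str:
    """Rebuild a header line from its static-table index."""
    name, value = STATIC_TABLE[index]
    return f"{name}: {value}"

print(decode_indexed(2))  # :method: GET
```

One byte on the wire instead of a dozen, and the savings repeat on every single request.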

This might seem like a minor improvement. Nonetheless, it is not uncommon for the headers to be larger than the response body. Bear in mind that every request includes all your cookies in the headers.


I know I’m overdoing the cookies joke, and I don’t care. Photo by Nathan Dumlao on Unsplash

Are your users and your code ready for HTTP2?

If you are a web developer, HTTP2 won't change your workflow at all. It maintains the same API as HTTP1.1 and extends it. The breaking changes that justify a new version happen at the wire level: it is the client (usually a browser) and the server that must implement a very different way to encode and decode messages and send them down the wire.

Modern browsers only support HTTP2 over TLS (HTTPS), and negotiate which protocol to use, HTTP1.1 or HTTP2, during the TLS handshake (via the ALPN extension). If you support any version of IE, clients that aren't browsers, or expose a public API, your server should be able to respond to HTTP1.1 requests too.
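On the client side, the negotiation boils down to offering both protocols and letting the server pick. A sketch with Python's standard `ssl` module:

```python
import ssl

# Offer both protocols during the TLS handshake via ALPN; the server
# picks one, and the rest of the connection uses that protocol.
ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])  # prefer HTTP2, fall back to HTTP1.1

# After wrapping a socket with this context and completing the
# handshake, ssl_sock.selected_alpn_protocol() returns the winner:
# "h2", "http/1.1", or None if the server doesn't speak ALPN.
```

This is why a server that only speaks HTTP2 silently locks out older clients: if ALPN never settles on "h2", there is no shared protocol to fall back to unless HTTP1.1 is also on offer.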

After upgrading to HTTP2

Some HTTP1.1 optimizations no longer make sense in HTTP2.

If you upgrade and keep using domain sharding, you will open a separate TCP connection for each origin, which throws away much of the benefit of multiplexing.

If your server setup allows you, simplify your code using server push instead of inlining resources.

Lastly, consider reducing code bundling / concatenation. Concatenation is a bit tricky with HTTP2. On one hand, multiplexing eliminates the network cost of making many requests, and several smaller, more specific assets allow more efficient caching and reuse. On the other hand, a gzipped bundle can be much smaller than the sum of its unbundled parts, depending on how much repetition the files share, and there are benefits to having all your JS parsed ahead of time. Those benefits can outweigh what you would gain from caching. You'll simply have to test and measure to find the right amount of concatenation.
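The compression argument is easy to demonstrate. Gzip (deflate) finds repetition within a sliding window, so concatenating similar modules lets later ones compress against earlier ones. This is a synthetic example with deliberately repetitive content; real-world gains depend entirely on how much your modules actually share:

```python
import zlib

# Compare compressing three similar modules separately vs. bundled.
# Deflate can reference repetition across the whole bundle, so the
# concatenation compresses better than the sum of its parts.

module = b"export function handler(event) { return render(event.payload); }\n" * 50
modules = [module, module, module]

separate = sum(len(zlib.compress(m)) for m in modules)
bundled = len(zlib.compress(b"".join(modules)))

print(bundled < separate)  # True
```

With HTTP2 the question is no longer "how few requests can I make?" but "at what bundle size does compression and parse-time win over cache granularity?", and only measurement answers that for your codebase.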

Wrapping up

HTTP2 has a lot to it. Ultimately, it allows you to write simpler, better performing applications. It also makes sure that we use the available bandwidth in a more efficient way. It improves the Internet for everyone.

Wanna learn more? Check out these resources: