HTTP/3, the third major version of the Hypertext Transfer Protocol, is set to shake things up. Here's a primer to bring you up to speed.
We have heard a lot about the new HTTP version since it debuted a while ago, but have you ever wondered what it is and why we need such a significant upgrade? After all, HTTP/2 has been exceptionally reliable for transferring data between systems.
But what went wrong with it?
A little history: HTTP/2 and HTTP/3
HTTP/2 was a significant upgrade over the original HTTP, the application-layer protocol that sits at the top of the network stack and became the standard way to transfer data from one system to another.
HTTP runs a layer above TCP/IP, enabling data transfer over a network of computers or systems. It works by first creating a TCP connection through what is known as a three-way handshake. The core job of that handshake is to establish a reliable agreement between the two systems before packets start flowing, so that packet loss can be detected and recovered from.
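To make this concrete, here is a minimal sketch of an HTTP/1.1 request riding on a single TCP connection. The toy server, the port choice, and the hard-coded response are illustrative stand-ins, not a real deployment; the point is that `connect()` performs the TCP handshake, and HTTP is just text exchanged on top of it.

```python
import socket
import threading

def tiny_server(sock):
    # accept() completes the TCP three-way handshake with the client
    conn, _ = sock.accept()
    conn.recv(1024)  # read the HTTP request (contents ignored in this toy)
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
    conn.close()

# Stand-in server on localhost; the OS picks a free port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=tiny_server, args=(server,)).start()

# Client side: connect() is the TCP handshake, then HTTP rides on top.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"GET / HTTP/1.1\r\nHost: localhost\r\n\r\n")
response = client.recv(1024)
client.close()
server.close()
print(response.decode().splitlines()[0])  # HTTP/1.1 200 OK
```

Note that nothing in the client is HTTP-specific at the socket level: TCP only sees opaque bytes, a detail that matters later when we look at head-of-line blocking.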
HTTP/1.1, unlike HTTP/2 with its single TCP connection, typically opens multiple TCP connections, which was one reason for its higher latency. In addition, every TCP handshake was followed by a TLS handshake each time a new connection was formed, making connection setup slow.
It also lacked multiplexing to serve parallel requests, and its tight dependency on the transport layer made flow and congestion control difficult.
HTTP/2 addressed this by letting the client and the web server control the flow and buffer sizes to avoid congestion.
There were other upgrades, too, like better header compression to save bandwidth, but the biggest was multiplexing: multiple HTTP streams could reuse the same TCP connection without another handshake. This made HTTP/2 a reliable and fast protocol for transferring a good amount of data across web pages, and it could consistently handle multiple parallel requests without slowing down page load.
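A rough sketch of what multiplexing means on the wire: frames from several logical streams are tagged with a stream ID and interleaved on the one TCP connection, instead of each resource getting its own connection. The stream contents below are invented for illustration, though HTTP/2 really does use HEADERS and DATA frames and odd-numbered client stream IDs.

```python
from itertools import zip_longest

# Two logical streams sharing one connection (IDs and frames illustrative).
streams = {
    1: ["HEADERS", "DATA", "DATA"],  # e.g. an HTML page
    3: ["HEADERS", "DATA"],          # e.g. a stylesheet
}

wire = []  # the single TCP connection, carrying stream-tagged frames
for frames in zip_longest(*streams.values()):
    for stream_id, frame in zip(streams, frames):
        if frame is not None:
            wire.append((stream_id, frame))

# Frames from both streams alternate on the same connection:
print(wire)
# [(1, 'HEADERS'), (3, 'HEADERS'), (1, 'DATA'), (3, 'DATA'), (1, 'DATA')]
```

Because each frame carries its stream ID, the receiver can reassemble both resources even though their bytes are interleaved, and no second handshake is needed.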
Where did it fail?
Even though HTTP/2 could reuse the TCP connection, TCP itself knows nothing about the streams layered on top of it. Any packet loss at the TCP level therefore risked stalling every stream sharing that connection, not just the one whose data was lost.
This stall is called head-of-line blocking. It follows from TCP's in-order delivery guarantee: every packet sent in a given sequence must be received, acknowledged, and handed to the application in that same sequence. When a segment is lost, TCP holds back everything after it until the gap is filled by retransmission, blocking all the other streams on that connection.
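The in-order rule can be shown with a small simulation (a toy model, not real TCP): the receiver may only release a contiguous prefix of segments to the application, so one lost segment holds back everything behind it.

```python
def deliver_in_order(received_seqs):
    """Return the segments the application can read, given which
    sequence numbers have arrived. Like TCP, only a contiguous
    prefix is released; a gap buffers everything after it."""
    delivered = []
    expected = 1
    for seq in sorted(received_seqs):
        if seq != expected:
            break  # gap found: later segments stay buffered
        delivered.append(seq)
        expected += 1
    return delivered

# Segments 1-5 were sent; segment 2 was lost on the wire.
arrived = {1, 3, 4, 5}
print(deliver_in_order(arrived))  # [1] because 3, 4, 5 wait behind the gap

# Once segment 2 is retransmitted, the whole sequence is released.
arrived.add(2)
print(deliver_in_order(arrived))  # [1, 2, 3, 4, 5]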
Can data loss be prevented?
Although we know the leading cause of the issue, we can't prevent it, because there is no perfect network with ideal bandwidth. With bandwidth varying from network to network, and with mobile devices switching networks frequently, data loss is inevitable and expected.
So how do we solve this?
The HTTP/3 protocol's alternative to this peculiar behavior of TCP is UDP, a transport protocol that expects no acknowledgments and guarantees no ordering, and hence does no blocking.
On top of UDP sits QUIC, a transport protocol that behaves like an upgraded TCP: it carries the kind of streams HTTP/2 introduced, but makes them genuinely independent and parallel, and it builds TLS 1.3 encryption directly into connection setup.
QUIC streams have solved the issues that made TCP slow. To list its achievements:
It reduces TLS latency by building encryption into the handshake.
It uses UDP underneath, so packet loss on one stream no longer blocks the others.
It is also likely to gain error-correction features for recovering from data loss.
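The key difference from TCP can be seen by reworking the earlier blocking simulation so that ordering is tracked per stream, as QUIC does (again a toy model, not the actual protocol): a gap in one stream now delays only that stream.

```python
def deliverable(received):
    """Per-stream in-order delivery: each stream releases its own
    contiguous prefix, regardless of gaps in other streams."""
    out = {}
    for stream, seqs in received.items():
        delivered, expected = [], 1
        for seq in sorted(seqs):
            if seq != expected:
                break  # gap blocks only this stream
            delivered.append(seq)
            expected += 1
        out[stream] = delivered
    return out

# Two streams multiplexed on one connection; stream "a" lost packet 2.
received = {"a": {1, 3}, "b": {1, 2, 3}}
print(deliverable(received))  # {'a': [1], 'b': [1, 2, 3]}
```

Stream "b" is fully delivered even while "a" waits for its retransmission; under TCP, the same loss would have stalled both.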
These changes improved performance enough to convince Big Tech firms like Google (in Chrome) and Facebook to deploy HTTP/3 across their platforms.