What is a protocol?
You can think of a protocol as a collection of rules that govern how information is transferred from one computer to another. Each protocol is a little different, but usually they include a header, payload and footer. The header contains the source and destination addresses and some information about the payload (type of data, size of data, etc.). The payload contains the actual information, and the footer holds some form of error detection. Some protocols also support a feature called “encapsulation,” which lets them include other protocols inside of their payload section.
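The header/payload/footer split described above can be sketched in a few lines of code. This is a toy frame layout invented purely for illustration (the field sizes and names are assumptions, not any real protocol): a header with source and destination addresses plus the payload size, the payload itself, and a footer carrying a CRC32 checksum for error detection.

```python
import struct
import zlib

def build_frame(src: int, dst: int, payload: bytes) -> bytes:
    # Header: source address, destination address, payload size.
    header = struct.pack("!HHI", src, dst, len(payload))
    # Footer: a checksum over everything else, for error detection.
    footer = struct.pack("!I", zlib.crc32(header + payload))
    return header + payload + footer

def parse_frame(frame: bytes):
    src, dst, length = struct.unpack("!HHI", frame[:8])
    payload = frame[8:8 + length]
    (checksum,) = struct.unpack("!I", frame[8 + length:8 + length + 4])
    if zlib.crc32(frame[:8 + length]) != checksum:
        raise ValueError("corrupt frame")
    return src, dst, payload

frame = build_frame(1, 2, b"hello")
print(parse_frame(frame))  # (1, 2, b'hello')
```

Encapsulation, in this picture, just means the `payload` bytes are themselves a complete frame of some other protocol.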
Introduction To HTTP/2:
Hypertext Transfer Protocol (HTTP) is the mechanism a browser uses to request information from a server and display webpages on your screen. A new version of this reliable and ubiquitous protocol was recently published as a draft by the organization in charge of creating standards for the internet, the Internet Engineering Task Force (IETF). This means that the old version, HTTP/1.1, in use since 1999, will eventually be replaced by a new one, dubbed HTTP/2. This update improves the way browsers and servers communicate, allowing for faster transfer of information while reducing the amount of processing power needed.
So what is it, exactly? Is this a rewrite of the protocol?
It is not a ground-up rewrite of the protocol; HTTP methods, status codes and semantics are the same, and it should be possible to use the same APIs as HTTP/1.x (possibly with some small additions) to represent the protocol.
The focus of the protocol is on performance; specifically, end-user perceived latency, network and server resource usage. One major goal is to allow the use of a single connection from browsers to a Web site.
HTTP/2 is being developed by the Hypertext Transfer Protocol working group (httpbis, where bis means “repeat” or “twice”) of the Internet Engineering Task Force. HTTP/2 would be the first new version of HTTP since HTTP 1.1, which was standardized in RFC 2616 in 1999. The Working Group presented HTTP/2 to the IESG for consideration as a Proposed Standard in December 2014, and the IESG approved it for publication as a Proposed Standard on February 17, 2015.
This standardization effort grew out of SPDY, an HTTP-compatible protocol developed by Google and supported in all major browsers.
A working group has been developing HTTP/2 since 2012 and adopted Google’s SPDY protocol as an initial blueprint, with community feedback resulting in “substantial changes” to the standard, such as the compression scheme and the framing format of the protocol.
After more than two years of discussion, over 200 design issues, 17 drafts, and 30 implementations, the HTTP/2 and HPACK specifications have now been approved by the IETF’s steering group.
A key point in the protocol development process was the iteration the working group did between protocol updates, and implementations and testing. Certain draft protocol versions were labelled by the working group as “implementation drafts”, and the participants — many web browser and web server providers — updated their implementations and tested out the protocol changes. The result is a thoroughly validated protocol that has been shown to interoperate and that meets the needs of many major stakeholders.
Why is this important?
Under HTTP/1.1, a browser typically opens several TCP connections to a site in order to fetch its many resources. All those connections and the processing power they require can lead to slowdowns as more and more elements are added to a site.
People have been searching for ways to speed up the internet since the days when dial-up and AIM were ubiquitous. One of the more common techniques is caching, where certain information is stored locally as opposed to transferring everything anew each time it’s requested. But others have resorted to tricks like lowering the resolution of images and videos; still others have minified their sources using grunt/gulp. These options are useful, but are really just Band-Aids. So Google decided to dramatically overhaul HTTP/1.1 and create SPDY.
What is SPDY?
SPDY (pronounced “SPeeDY”) is a networking protocol whose goal is to speed up the web. SPDY augments HTTP with several speed-related features that can dramatically reduce page load time:
- SPDY allows the client and server to compress request and response headers, which cuts down on bandwidth usage when similar headers (e.g. cookies) are sent over and over across multiple requests.
- SPDY allows multiple, simultaneously multiplexed requests over a single connection, saving on round trips between client and server, and preventing low-priority resources from blocking higher-priority requests.
The goal of SPDY is to reduce web page load time. This is achieved by prioritizing and multiplexing the transfer of web page subresources so that only one connection per client is required.
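A rough back-of-the-envelope model shows why multiplexing over a single connection saves round trips. The numbers below (round-trip time, resource count, connection limit) are illustrative assumptions, not measurements:

```python
RTT = 0.05  # assumed round-trip time in seconds
N = 20      # assumed number of sub-resources on the page

# HTTP/1.1: browsers typically open ~6 parallel connections per host,
# so requests queue up in batches of 6, costing ceil(N / 6) round trips.
connections = 6
batches = -(-N // connections)          # ceiling division
http1_latency = RTT * batches

# SPDY-style multiplexing: all N requests share one connection and go
# out together, so roughly one round trip covers them all.
multiplexed_latency = RTT * 1

print(f"HTTP/1.1: {http1_latency:.2f}s, multiplexed: {multiplexed_latency:.2f}s")
```

This ignores bandwidth, TCP slow start, and server think time, but it captures the core argument: latency under HTTP/1.1 grows with the number of resources, while a multiplexed connection keeps it close to constant.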
A short video to better understand SPDY: https://www.youtube.com/watch?v=WkLBrHW4NhQ
Goals of HTTP/2:
Its many benefits include:
- Multiplexing and concurrency: several requests can be sent in rapid succession on the same TCP connection, and responses can be received out of order, eliminating the need for multiple connections between the client and the server. This:
- allows loading page elements in parallel over a single TCP connection
- avoids head-of-line blocking
- makes it possible to transfer data simultaneously over multiple streams
- Stream dependencies: the client can indicate to the server which resources are more important than others
- Header compression: HTTP header size is drastically reduced
- Server push: the server can send resources the client has not yet requested
- Improved security
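The header-compression idea can be sketched with a toy indexing scheme. HTTP/2’s actual mechanism, HPACK, uses a static table, a dynamic table, and Huffman coding on the wire; the sketch below keeps only the core trick, which is that a header seen before is replaced by a small table index, so a bulky repeated cookie costs an integer instead of hundreds of bytes (class names and layout here are invented for illustration):

```python
class ToyEncoder:
    """Replaces previously seen (name, value) headers with table indices."""
    def __init__(self):
        self.table = []

    def encode(self, headers):
        out = []
        for h in headers:
            if h in self.table:
                out.append(self.table.index(h))  # tiny integer reference
            else:
                self.table.append(h)
                out.append(h)                    # literal, sent once
        return out

class ToyDecoder:
    """Rebuilds headers, keeping its table in sync with the encoder."""
    def __init__(self):
        self.table = []

    def decode(self, encoded):
        headers = []
        for item in encoded:
            if isinstance(item, int):
                headers.append(self.table[item])
            else:
                self.table.append(item)
                headers.append(item)
        return headers

enc, dec = ToyEncoder(), ToyDecoder()
request = [("cookie", "session=abc123"), ("user-agent", "demo/1.0")]
first = enc.encode(request)    # everything is new: sent as literals
second = enc.encode(request)   # [0, 1]: just two table indices
print(second)
```

Both sides keep identical tables, which is also why HPACK state lives per connection: a single long-lived connection lets the tables pay off across every request on it.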
Differences from HTTP 1.1
HTTP/2 leaves most of HTTP 1.1’s high-level syntax, such as methods, status codes, header fields, and URIs, the same. What changes is how the data is framed and transported between the client and the server.
Efficient websites minimize the number of requests required to render an entire page by minifying (reducing the amount of code and packing smaller pieces of code into bundles, without reducing its ability to function) resources such as images and scripts. However, minification is neither always convenient nor efficient, and may still require separate HTTP connections to get the page and the minified resources. HTTP/2 allows the server to “push” content, that is, to respond with data for more queries than the client requested. This allows the server to supply data it knows a web browser will need to render a web page, without waiting for the browser to examine the first response, and without the overhead of an additional request cycle.
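The push behavior boils down to the server answering one request with several responses. Here is a toy simulation of that idea (not a real HTTP/2 stack; the page-to-resource mapping is a hypothetical site layout):

```python
# Resources the server knows each page will need, so it can push them
# before the client has even parsed the HTML.
PUSH_MAP = {
    "/index.html": ["/style.css", "/app.js"],
}

def handle_request(path):
    """One request in, potentially several responses out."""
    pushed = PUSH_MAP.get(path, [])
    return [path] + pushed

responses = handle_request("/index.html")
print(responses)  # ['/index.html', '/style.css', '/app.js']
```

In real deployments the push list typically comes from server configuration or from `Link` headers emitted by the application, and the client remains free to decline a pushed stream it already has cached.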
Why is HTTP/2 better?
In a few words: HTTP/2 loads webpages much faster, saving everyone time that otherwise would go to waste. It’s as simple as that.
The example below, published by the folks over at HttpWatch, shows transfer speeds increasing by more than 20 percent.
Example of HTTP page load speed (above) against HTTP/2 (below)
HTTP/2 improves speed mainly by maintaining a single, persistent connection between the browser and the server, as opposed to opening a connection every time a piece of information is needed. This significantly reduces the overhead of each transfer. Plus, it transfers data in binary, a computer’s native format, rather than in text. This means your computer doesn’t have to waste time translating information into a format it understands.
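That binary framing is concrete and fixed: every HTTP/2 frame starts with the 9-octet header defined in RFC 7540, section 4.1, namely a 24-bit payload length, an 8-bit type, 8 bits of flags, and a reserved bit followed by a 31-bit stream identifier. A short sketch of parsing one:

```python
import struct

# A few of the frame types defined in RFC 7540.
FRAME_TYPES = {0x0: "DATA", 0x1: "HEADERS", 0x4: "SETTINGS", 0x8: "WINDOW_UPDATE"}

def parse_frame_header(data: bytes):
    # 1 + 2 bytes of length, 1 byte type, 1 byte flags, 4 bytes stream id.
    length_hi, length_lo, ftype, flags, stream = struct.unpack("!BHBBI", data[:9])
    length = (length_hi << 16) | length_lo
    stream_id = stream & 0x7FFFFFFF   # top bit is reserved
    return length, FRAME_TYPES.get(ftype, hex(ftype)), flags, stream_id

# A HEADERS frame: 13 bytes of payload, END_HEADERS flag (0x4), stream 1.
raw = bytes([0x00, 0x00, 0x0D, 0x01, 0x04, 0x00, 0x00, 0x00, 0x01])
print(parse_frame_header(raw))  # (13, 'HEADERS', 4, 1)
```

Because every frame announces its own length and stream up front, a receiver can interleave frames from many streams on one connection without the ambiguity of parsing text delimiters, which is exactly what makes the multiplexing described earlier practical.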
Can I try HTTP/2?
You can use Internet Explorer on the Windows 10 Technical Preview (which supports http2 draft-14 via ALPN), as well as the Firefox and Chrome developer editions.
Before you start, double-check that the ‘Use HTTP2’ option is enabled in the browser settings.
Chrome and Mozilla outline roadmaps for HTTP/2
Google outlined its plans, namely adopting HTTP/2 in the coming weeks with Chrome 40.
The current Firefox 35 release uses the draft ID h2-14 to negotiate with google.com.
Firefox 36, currently in beta, will support the official final “h2” protocol identifier for negotiation next week.
Get to Know More: