HTTP/2 from scratch

What is HTTP/2?

HTTP/2 is a replacement for how HTTP is expressed “on the wire.” It is not a ground-up rewrite of the protocol; HTTP methods, status codes and semantics are the same, and it should be possible to use the same APIs as HTTP/1.x (possibly with some small additions) to represent the protocol.

The focus of the protocol is on performance; specifically, end-user perceived latency, network and server resource usage. One major goal is to allow the use of a single connection from browsers to a Web site.

It also transfers data in a binary format instead of text.

The basis of the work was SPDY, but HTTP/2 has evolved to take the community’s input into account, incorporating several improvements in the process.

Please check my GitHub project for a working example.

What are the key differences to HTTP/1.x?

At a high level, HTTP/2:

  • is binary, instead of textual
  • is fully multiplexed, instead of ordered and blocking
  • can therefore use one connection for parallelism
  • uses header compression to reduce overhead
  • allows servers to “push” responses proactively into client caches


HTTP/2 comprises two specifications:

  • Hypertext Transfer Protocol version 2 – RFC7540
  • HPACK – Header Compression for HTTP/2 – RFC7541

For more details, see the RFCs listed above.

Performance and optimization?

Have you ever noticed that when you surf the web, some sites load much quicker than others? How fast a particular web page loads in the visitor's browser is its performance: a site that loads quickly has good performance, while a site that makes you stare at a white screen for a long time has bad performance. The performance of a website depends on several factors.


Studies have shown that visitors will abandon a site in as little as three seconds if it does not load properly, which is very bad for business.

As of July 2016, the average size of a web page is 2.4 megabytes. That means every time you visit a link, you are likely downloading 2.4 megabytes of content, of which only a small fraction actually matters.

Every bit of data costs money, and thus serving up a bloated website wastes the company's money. If the average size of your web pages is 2.4 megabytes and you get a thousand visitors, you just pushed 2.4 gigabytes of data through the web. And if each of those visitors loads a couple of pages, that number is multiplied.


Most web hosts operate with bandwidth tiers, and once you exceed them, you pay a premium. Your site also has to be easy to search for and find, which is achieved through indexing.


Through indexing, your site gets ranked in Google search, which can increase your company's revenue. Altogether, if you want your site to be found on search engines, making sure it is optimized for performance is an important step.

So, what is performance? It is how fast and effectively your site loads in the visitor's browser. To build a website with great performance, we need to optimize everything within it, from the images to the code and other elements, and how they are handled on the server, delivered through the network, and processed by the browser.


Where can we optimize?

With HTTP/2, concatenating CSS and JS files into large bundles is no longer necessary (minifying and compressing them still helps, though). Images need to be processed well, reduced in size, and served responsively. Server push will take care of priming the cache, etc.

Before you start and get into the project..

Go through PostCSS and the npm package manager.

Let's see how the DOM gets loaded sequentially when we have assets like HTML, CSS, images and JavaScript files. It starts with the HTML and then loads the CSS; if you have multiple CSS files, they load one after the other, and the same goes for the images and JavaScript files. The DOM loads when the browser asks the web server for the page and its assets using the HTTP/1.1 protocol. The following image shows how HTTP/1.1 loads the DOM with all its assets.


This uses up to six TCP connections, loading assets one after another, as shown below.


Now let's see how HTTP/2 handles the same..

Before that, let's see the requirements for switching to the HTTP/2 protocol.


If any of the above requirements is not met, the browser falls back to the HTTP/1.1 protocol by default, without anything breaking. In practice HTTP/2 traffic is encrypted: browsers only support it over HTTPS (you can generate a certificate for free with OpenSSL), not plain HTTP. HTTP/2 builds on SPDY (Google's protocol), which was designed for better performance.

Let's see how HTTP/2 handles requests..


Using HTTP/2, the browser can request and receive many different files at the same time, and doesn't have to wait for one file to finish before starting the download of the next one.

How to measure performance?

The usual way to check performance is the Network panel in the browser's DevTools, as shown below.


HTTP/1.1 vs HTTP/2

You can also check the performance of your website with the following links:

Optimizing Images:

Images drag down the performance of a site: loading them consumes, on average, about 70% of the download of DOM assets.

Standard image formats

GIF – small in size, but with a limited color palette; rarely the best choice except for simple animations

JPEG – universal support, progressive loading, lossy compression and relatively small file size

PNG – lossless compression, supports transparency; a complex PNG has a larger file size than a JPEG

When to use PNG? For computer-generated graphics and images that need transparency.


SVG – code-based, rendered in the browser

  • style and manipulate it using CSS and JavaScript
  • scales to any size or resolution
  • not universally supported; requires a PNG fallback


You can also do manual optimization in Photoshop, for example blurring unimportant areas so they compress better.


Let's get our hands on the code now..

You can check my project at this link and just clone it.

Once you have cloned it, run npm install and then gulp --verbose to run the project.

Did you know that you can optimize your images using gulp? Seriously, I only just found out ;).

Try the gulp-image or gulp-imagemin modules to automate image optimization.

I opened my project in the cmd/terminal and am installing gulp-image now:

npm install gulp-image --save-dev

Now go to the gulpfile and write the following code..
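A sketch of what the imageoptim task can look like (the development/images and production/images folder names are assumptions based on the project layout; see the gulp-image README for its options):

```javascript
// imageoptim task: compress every image, keeping the folder structure
const gulp = require('gulp');
const image = require('gulp-image');

gulp.task('imageoptim', function () {
  return gulp.src('development/images/**/*')
    .pipe(image())                          // run the bundled optimizers (jpeg, png, gif, svg)
    .pipe(gulp.dest('production/images'));
});
```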


Now run the task "gulp imageoptim" in cmd to see all the images and their folders appear in the production folder in compressed form.

Code Optimization for HTTP/2..


Automated Minification of HTML,JS and CSS:

We will use gulp-htmlmin to minify HTML:

npm install gulp-htmlmin --save-dev

Include it in the gulpfile like this:

htmlmin = require('gulp-htmlmin')


Write the above code in the gulpfile as the html task, run "gulp html" in the command prompt, and check index.html in the production folder to see the HTML collapsed onto one line.
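A minimal version of that html task (a sketch; the development/ and production/ folder names follow the project layout) looks like:

```javascript
// html task: minify HTML down to a single line
const gulp = require('gulp');
const htmlmin = require('gulp-htmlmin');

gulp.task('html', function () {
  return gulp.src('development/*.html')
    .pipe(htmlmin({ collapseWhitespace: true }))  // strips whitespace between tags
    .pipe(gulp.dest('production'));
});
```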

gulp-minify – minify JavaScript

Do the same thing we did earlier, adding

.pipe(minify()) to the task, and run "gulp javascript", which creates new minified files like flexslider-min.js. Now point index.html at these files so the page loads faster.
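A sketch of that javascript task (paths are assumptions; by default gulp-minify writes a *-min.js copy next to each source file):

```javascript
// javascript task: produce minified *-min.js copies
const gulp = require('gulp');
const minify = require('gulp-minify');

gulp.task('javascript', function () {
  return gulp.src('development/JS/**/*.js')
    .pipe(minify())                        // emits e.g. jquery.flexslider-min.js
    .pipe(gulp.dest('production/JS'));
});
```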

cssnano – minify CSS

cssnano = require('cssnano')

Go to the gulp css task, write the code shown below, and finally run gulp to run all the tasks (html, javascript, css) and see how much faster your site loads in the browser.
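Since cssnano is a PostCSS plugin, it runs through gulp-postcss; a sketch (folder names are assumptions based on the project):

```javascript
// css task: minify stylesheets with cssnano via PostCSS
const gulp = require('gulp');
const postcss = require('gulp-postcss');
const cssnano = require('cssnano');

gulp.task('css', function () {
  return gulp.src('development/CSS/**/*.css')
    .pipe(postcss([cssnano()]))            // minify each stylesheet
    .pipe(gulp.dest('production/CSS'));
});
```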


Modularize CSS for HTTP/2

Instead of loading one gigantic CSS file, we will split it into modules and load only the modules a page needs; when a module is updated, only that file has to be downloaded again.

I merged a few CSS files into modules and linked them in index.html individually. Don't panic about the multiple CSS requests; HTTP/2 loads just the necessary CSS files efficiently over a single connection. For more info, just go to my GitHub project and check my "modularize css" commit.

Deferring NonCritical CSS

What is defer? What is async?

Check here

For JS files, defer and async give us a simple way to do this. For CSS files there is no equivalent attribute, so we do it as shown below.

In our scenario we remove the stories and footer CSS links from index.html and place them at the bottom of index.html, before the closing </body> tag, with a custom script that loads the deferred styles. For more, check my GitHub commit.
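A common pattern for that custom loader (a sketch, not necessarily the exact script from the commit; the file path is an assumption) preloads the stylesheet and switches its rel once it has downloaded:

```html
<!-- non-critical CSS: downloads without blocking rendering, applies once loaded -->
<link rel="preload" href="CSS/footer.css" as="style"
      onload="this.onload=null;this.rel='stylesheet'">
<noscript><link rel="stylesheet" href="CSS/footer.css"></noscript>
```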

Can we load <link> stylesheets in the body for better performance? Yes, with HTTP/2 support. Normally DOM rendering stops when the browser finds CSS files, until all the CSS has finished loading. This behavior changes with HTTP/2. For more, see here.

Javascript loading using defer and async

First of all, HTTP/2 makes JavaScript concatenation pretty much unnecessary. You'll remember that when a page loads over HTTP/1, any time the browser encounters a script, it stops rendering the page, downloads the JavaScript file, and only then resumes rendering.

Let's see how JavaScript loads normally..


When the browser parses the page, it starts with the HTML and starts parsing it until it encounters the call for the JavaScript. Then it stops parsing the HTML while it downloads and then executes the JavaScript. Only after both the download and execution are complete does it continue parsing the HTML. And here you see why we usually place our JavaScripts at the bottom of the page, so we’re not blocking the HTML parsing until the JavaScript kicks in.


If we append the async attribute to our script call, the way the browser parses the content changes. Now the browser will continue to parse the HTML while the JavaScript is downloading, and the only time the parsing stops is when the JavaScript is executed.

So this improves the performance of the page quite dramatically. The problem with async is that you don't really have control over when the JavaScript will be executed. That means, if you have a bunch of different scripts called in asynchronously, they will start executing as soon as each one finishes downloading, and if you have dependencies, a file may execute before its dependency has been loaded in the page, and that will cause problems.


The defer attribute kinda solves some of this, and it does it by completely deferring the execution of a JavaScript until everything else has happened.

So, we parse all the HTML, the JavaScript is downloaded whenever it’s encountered, and the execution only happens once the entire HTML is fully parsed. The other thing to know about defer is, when you use defer, the execution of the JavaScript will happen in the order the deferred elements are listed on the page. That means, if you list a bunch of different elements with defer, they will execute from the top down.


In the above image, we used async for four scripts (which can load in parallel while the DOM renders), defer for one script (which needs to run last, after the DOM renders), and no async/defer for the jQuery script, since other scripts depend on it.
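For reference, the three variants look like this in markup (the file names here are hypothetical):

```html
<!-- no async/defer: blocks parsing, but later scripts can rely on jQuery -->
<script src="JS/libs/jquery.min.js"></script>
<!-- async: downloads in parallel, executes as soon as it arrives -->
<script src="JS/analytics.js" async></script>
<!-- defer: downloads in parallel, executes in order after parsing finishes -->
<script src="JS/main.js" defer></script>
```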


Compress data using GZIP

This depends on your hosting server; you may use Apache, nginx, etc. On Apache you configure it in the .htaccess file, as explained below – check this for other servers and their configurations.
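On Apache, a minimal .htaccess fragment for this (a sketch; it assumes your host has mod_deflate enabled) looks like:

```apache
<IfModule mod_deflate.c>
  # compress text-based assets before sending them
  AddOutputFilterByType DEFLATE text/html text/css application/javascript image/svg+xml
</IfModule>
```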

Cache files in browser

You can cache files (CSS, JS, any image type) and set a limit for their expiration.
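On Apache this is done with mod_expires in .htaccess; a sketch matching the lifetimes used in this project (a year for code, a month for images):

```apache
<IfModule mod_expires.c>
  ExpiresActive On
  # static code: cache for a year
  ExpiresByType text/css "access plus 1 year"
  ExpiresByType application/javascript "access plus 1 year"
  # images: cache for a month
  ExpiresByType image/jpeg "access plus 1 month"
  ExpiresByType image/png "access plus 1 month"
</IfModule>
```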


In the above image we can see that we cached static assets like CSS and JS for 1 year and images for 1 month. Now it may strike you: how will my updated site load if I cache for 1 year? You may well update your site with new images, a new look via CSS, or new modules. So we have a way to reload specific files, using cache busting.



We literally force the browser to download new files and we can do this in a really smart way by simply renaming the files any time we change them.

Now, on the face of it, that sounds really complicated, because we don't want to rename style.css to style-version-one, style-version-two and so on by hand. Instead we can use something called a file hash. A file hash is a unique value generated from the contents of a file. That means any time you change a file, its hash changes as well, so the file gets a new name and is brought into the browser fresh. There are simple tools to automate this.


Now that you know about file hashes: we name each file with its hash, and it gets cached in the browser. If you change the contents behind style-123456.css, the file gets renamed, and when the browser hits the server it sees what looks like a brand-new file, which is downloaded immediately and cached again for 1 year, as shown below.


We have tools to achieve this functionality:

gulp-rev – This tool automatically appends hashes to the end of file names.

gulp-rev-replace – which goes into index.html and any other HTML files, or other files, finds references to the files that were just hashed and changes them so that we don’t have to manually change these files.

rev-del – goes through the manifest, figures out which files are the currently updated versions, and deletes the ones that are not, so we don't end up with a ton of different hashed versions of our files in storage.

Lets see how we will use it in gulp file..

Let's install and include all three of these modules in the gulpfile. I am creating a folder named 'limbo' as a temporary staging area: all changed files (html, css, js) are moved there first, the files are renamed there, and from there they are moved to production after renaming (revisioning with gulp-rev, reference updates with gulp-rev-replace, and deletion of old files with rev-del).
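A sketch of how the three tools chain together (gulp 3 syntax; task and folder names are assumptions based on the project layout, so check the project's gulpfile for the real version):

```javascript
const gulp = require('gulp');
const rev = require('gulp-rev');
const revReplace = require('gulp-rev-replace');
const revDel = require('rev-del');

// 1) hash the asset file names and record old -> new names in rev-manifest.json
gulp.task('revision', function () {
  return gulp.src('limbo/**/*.{css,js}')
    .pipe(rev())                            // style.css -> style-1d87beb4.css
    .pipe(gulp.dest('production'))
    .pipe(rev.manifest())
    .pipe(revDel({ dest: 'production' }))   // delete copies dropped from the manifest
    .pipe(gulp.dest('production'));
});

// 2) rewrite references in the HTML using the manifest
gulp.task('revreplace', ['revision'], function () {
  const manifest = gulp.src('production/rev-manifest.json');
  return gulp.src('limbo/**/*.html')
    .pipe(revReplace({ manifest: manifest }))
    .pipe(gulp.dest('production'));
});
```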

Now run gulp to see the changes below: the limbo folder, and hashes added to the file names.


Check rev-manifest.json to find the mappings from original to hashed file names…


Now let's test it: go to development/CSS/header.css and change a color value in .masthead, but don't save it immediately. Go to production/CSS/ and note the current file name carefully. After you save header.css, the file should be replaced, e.g. from header-123456.css to header-321342.css.

This is how cache busting gets an updated file replaced in the browser.

To understand more, check my GitHub commit.

Server Push


For server push, you configure the files to be pushed in advance in the .htaccess file of your hosting server: set a Link header on the response naming all the files you want pushed.
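A sketch of such a rule (it assumes mod_headers is available and borrows asset paths from this project):

```apache
<FilesMatch "index\.html$">
  # the server (or an HTTP/2 front end) turns these preload hints into pushes
  Header add Link "</CSS/header.css>; rel=preload; as=style"
  Header add Link "</JS/libs/jquery.flexslider-min.js>; rel=preload; as=script"
</FilesMatch>
```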


You can even do the same server push by renaming index.html to index.php and appending the following code to it.


function push_to_browser($as, $uri) {
    header('Link: ' . $uri . '; rel=preload; as=' . $as, false);
}

$assets = array(
    '<//,400i,700,700i,900,900i>' => 'style',
    '</style-main.css>' => 'style',
    '</CSS/header.css>' => 'style',
    '<//>' => 'script',
    '</JS/libs/jquery.flexslider-min.js>' => 'script',
    '</images/mainpromo/welcome01-1600.jpg>' => 'image'
);

array_walk($assets, 'push_to_browser');


Note: please install PHP on your machine, along with the npm module php2html in the gulpfile, and use it in the html task. For more details, please check my GitHub commit.

Leverage CDNs for performance


You can also check this

Now let's try adding an SSL certificate and enabling https..



Now, enable https in the gulp webserver as shown below..
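A sketch of that webserver task (the key/cert paths are assumptions; point them at your OpenSSL-generated files):

```javascript
// serve the production build over https
const gulp = require('gulp');
const webserver = require('gulp-webserver');

gulp.task('webserver', function () {
  return gulp.src('production')
    .pipe(webserver({
      https: {
        key: 'ssl/server.key',    // self-signed private key
        cert: 'ssl/server.crt'    // self-signed certificate
      },
      open: true
    }));
});
```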


Let's run gulp now to see the following.


Click proceed and see your project run over https. For more, check my GitHub commit.

The HTTP/2/SPDY protocol implementation reference was taken from:




HTTP/2 – The Rocket Booster of HTTP


What is a protocol?

You can think of a protocol as a collection of rules that govern how information is transferred from one computer to another. Each protocol is a little different, but usually they include a header, payload and footer. The header contains the source and destination addresses and some information about the payload (type of data, size of data, etc.). The payload contains the actual information, and the footer holds some form of error detection. Some protocols also support a feature called “encapsulation,” which lets them include other protocols inside of their payload section.


Introduction To HTTP/2:


HTTP stands for Hypertext Transfer Protocol, the mechanism a browser uses to request information from a server and display webpages on your screen. A new version of the reliable and ubiquitous HTTP protocol was recently published as a draft by the organization in charge of creating standards for the internet, the Internet Engineering Task Force (IETF). This means that the old version, HTTP/1.1, in use since 1999, will eventually be replaced by a new one, dubbed HTTP/2. This update improves the way browsers and servers communicate, allowing for faster transfer of information while reducing the amount of raw horsepower needed.

So what is it exactly? Is this a rewrite of the protocol?

It is not a ground-up rewrite of the protocol; HTTP methods, status codes and semantics are the same, and it should be possible to use the same APIs as HTTP/1.x (possibly with some small additions) to represent the protocol.

The focus of the protocol is on performance; specifically, end-user perceived latency, network and server resource usage. One major goal is to allow the use of a single connection from browsers to a Web site.

By Whom??

HTTP/2 is being developed by the Hypertext Transfer Protocol working group (httpbis, where bis means “repeat” or “twice”) of the Internet Engineering Task Force. HTTP/2 would be the first new version of HTTP since HTTP 1.1, which was standardized in RFC 2616 in 1999. The Working Group presented HTTP/2 to IESG for consideration as a Proposed Standard in December 2014, and IESG approved it to publish as Proposed Standard on Feb 17, 2015.


Origin From??

This standardization effort came as an answer to SPDY, an HTTP-compatible protocol developed by Google and supported in all major browsers.
A working group has been developing HTTP/2 since 2012 and adopted Google’s SPDY protocol as an initial blueprint, with community feedback resulting in “substantial changes” to the standard, such as the compression scheme and the framing format of the protocol.

 HTTP/2 protocol


After more than two years of discussion, over 200 design issues, 17 drafts, and 30 implementations, the HTTP/2 and HPACK specifications have now been approved by the IETF’s steering group.

A key point in the protocol development process was the iteration the working group did between protocol updates, and implementations and testing. Certain draft protocol versions were labelled by the working group as “implementation drafts”, and the participants — many web browser and web server providers — updated their implementations and tested out the protocol changes. The result is a thoroughly validated protocol that has been shown to interoperate and that meets the needs of many major stakeholders.

Why is this important?

HTTP/1.1 has been in use since 1999, and while it’s performed admirably over the years, it’s starting to show its age. Websites nowadays include many different components besides your standard HTML, like design elements (CSS), client-side scripting (JavaScript), images, video and Flash animations. To transfer that information, the browser has to create several connections, and each one has details about the source, destination and contents of the communication package or protocol. That puts a huge load on both the server delivering the content and your browser.


All those connections and the processing power they require can lead to slowdowns as more and more elements are added to a site.

People have been searching for ways to speed up the internet since the days when dial-up and AIM were ubiquitous. One of the more common techniques is caching, where certain information is stored locally as opposed to transferring everything anew each time it’s requested. But others have resorted to tricks like lowering the resolution of images and videos; still others have minified the sources using grunt/gulp. These options are useful, but are really just Band-Aids. So Google decided to dramatically overhaul HTTP/1.1 and create SPDY.

What is SPDY??

SPDY (pronounced “SPeeDY”) is a networking protocol whose goal is to speed up the web. SPDY augments HTTP with several speed-related features that can dramatically reduce page load time:

  • SPDY allows the client and server to compress request and response headers, which cuts down on bandwidth usage when similar headers (e.g. cookies) are sent over and over for multiple requests.
  • SPDY allows multiple, simultaneously multiplexed requests over a single connection, saving on round trips between client and server, and preventing low-priority resources from blocking higher-priority requests.
  • SPDY allows the server to actively push resources to the client that it knows the client will need (e.g. JavaScript and CSS files) without waiting for the client to request them, allowing the server to make efficient use of unutilized bandwidth.

The goal of SPDY is to reduce web page load time. This is achieved by prioritizing and multiplexing the transfer of web page subresources so that only one connection per client is required.



A short video to better understand SPDY:

Goals of  HTTP/2:

Its many benefits include:

  • Multiplexing and concurrency: several requests can be sent in rapid succession on the same TCP connection, and responses can be received out of order, eliminating the need for multiple connections between the client and the server
  • allows loading page elements in parallel over a single TCP connection
  • avoids head-of-line blocking
  • makes it possible to transfer data simultaneously over multiple streams
  • Stream dependencies: the client can indicate to the server which resources are more important than others
  • Header compression: HTTP header size is drastically reduced
  • Server push: the server can send resources the client has not yet requested
  • Improved security
Differences from HTTP 1.1

HTTP/2 leaves most of HTTP 1.1’s high level syntax, such as methods, status codes, header fields, and URIs, the same. The element that is modified is how the data is framed and transported between the client and the server.

Websites that are efficient minimize the number of requests required to render an entire page by minifying (reducing the amount of code and packing smaller pieces of code into bundles, without reducing its ability to function) resources such as images and scripts. However, minification is not necessarily convenient nor efficient, and may still require separate HTTP connections to get the page and the minified resources. HTTP/2 allows the server to “push” content, that is, to respond with data for more queries than the client requested. This allows the server to supply data it knows a web browser will need to render a web page, without waiting for the browser to examine the first response, and without the overhead of an additional request cycle.

Why is HTTP/2 better?

In a few words: HTTP/2 loads webpages much faster, saving everyone time that otherwise would go to waste. It’s as simple as that.

The example below, published by the folks over at HttpWatch, shows transfer speeds increasing more than 20 percent


Example of HTTP page load speed (above) against HTTP/2 (below)


HTTP/2 improves speed mainly by creating one constant connection between the browser and the server, as opposed to a connection every time a piece of information is needed. This significantly reduces the amount of data being transferred. Plus, it transfers data in binary, a computer’s native language, rather than in text. This means your computer doesn’t have to waste time translating information into a format it understands.

Can I try HTTP/2??

You can try it in Internet Explorer on the Windows 10 Technical Preview (http2 draft-14, ALPN), and also in the Firefox and Chrome developer editions.

Before you start, double-check that the ‘Use HTTP2’ option is enabled in the browser settings.


Chrome and Mozilla outline roadmaps for HTTP/2

Google outlined its plans, namely adopting HTTP/2 in the coming weeks with Chrome 40.

The current Firefox 35 uses a draft ID of h2-14 to negotiate with servers.

Firefox 36, currently in beta, will support the official final “h2” protocol for negotiation next week.

Get to Know More: