Quick Intro to Node.JS Microservices: Seneca.JS

What is microservices architecture?

These days, everyone is talking about microservices, but what are they really? Simply put, microservices architecture is when you separate your application into smaller applications (we will call them services) that work together.

I have found an awesome image that represents the difference between monolith and microservices apps:

The picture above explains it. On the left side we see one app that serves everything; it’s hosted on one server and it’s usually hard to scale. On the right side, we see the microservices architecture. This app has a different service for each piece of functionality. For example: one is for user management (registration, user profiles…), a second one for emails (sending emails, customising templates…), a third one for some other functionality. These services communicate using an API (usually by sending JSON messages) and they can live on the same server. The good thing is that they can also be spread across different servers or Docker containers.

How can I use NodeJS to make microservices architecture?

So, you want to use NodeJS to create microservices architecture? That’s very simple and awesome!

In my career, I’ve used many frameworks and libraries for creating microservices architecture, even created custom libraries (don’t do it!) — until I found SenecaJS. To explain what Seneca is, I will quote the official website:

Seneca is a microservices toolkit for Node.js. It helps you write clean, organized code that you can scale and deploy at any time.

Simple! Basically, it helps you exchange JSON messages between your services while keeping a good-looking, readable codebase.

Seneca uses actions. There are action definitions and action calls. We can store our action definitions inside our services and call them from any service. To understand how Seneca works, you need to think in terms of the modular pattern and resist the urge to put everything inside one file.

I am going to show you how it works!

Let’s play!

For this tutorial, we are going to build a simple app. Yay!

First, let’s create a simple NodeJS app:

npm init

It will go through the setup questions and generate a package.json file.

Then, we will install Seneca:

npm install seneca --save

It will install all the modules we need, and then we can just require Seneca and use it.

Before we start, let me explain a couple more things. There aren’t any conventions about what we should put inside our JSON objects, but I have found that a lot of people use the same style. I use this one: {role:'namespace', cmd:'action'}, and I recommend you stick to it, because inventing a new style can lead to problems if you work in a team. role is the name of a group of functions and cmd is the name of the action. We use this JSON object to identify which function we are going to call.
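
To make the pattern idea concrete, here is a tiny, dependency-free sketch of how dispatching on { role, cmd } could work. This is not Seneca’s actual implementation, just an illustration of the concept of matching a message against registered patterns:

```javascript
// Illustrative sketch only -- NOT Seneca's implementation.
// A message is matched against registered patterns; the first
// pattern whose keys all appear in the message selects the handler.
var actions = [];

function add(pattern, handler) {
  actions.push({ pattern: pattern, handler: handler });
}

function act(msg, done) {
  var match = actions.filter(function(a) {
    return Object.keys(a.pattern).every(function(k) {
      return a.pattern[k] === msg[k];
    });
  })[0];
  if (!match) return done(new Error('no matching action'));
  match.handler(msg, done);
}

add({ role: 'process', cmd: 'sum' }, function(msg, done) {
  done(null, { result: msg.numbers.reduce(function(a, b) { return a + b; }, 0) });
});

act({ role: 'process', cmd: 'sum', numbers: [1, 2, 3] }, function(err, out) {
  console.log(out); // { result: 6 }
});
```

Seneca’s real pattern matching is far richer (it handles pattern specificity, plugins, and transports), but the principle is the same: the message itself selects the function.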

I will create two files, index.js and process.js. index.js will send a request to process.js with some numbers; process.js will sum them up and return the result, which index.js will then write to the console. Sounds good? Let’s start!

Here is the code from process.js:

module.exports = function( options ) {
  var seneca = this;

  // Register the sum function under the { role, cmd } pattern
  seneca.add( { role:'process', cmd:'sum' }, sum );

  // Sums the numbers array and passes { result } to the done callback
  function sum ( args, done ) {
    var numbers = args.numbers;

    var result = numbers.reduce( function( a, b ) {
      return a + b;
    }, 0);

    done( null, { result: result } );
  }
}

Here, we define the function sum and add it to Seneca using the seneca.add function. The identifier is { role:'process', cmd:'sum' }. sum calls the done callback and passes it the result object. Seneca actions return objects by default, but they can be customized to return a string or a number, too.

Finally, here is the code from index.js:

var seneca = require('seneca')();

seneca.use( './process.js' );

seneca.act( { role: 'process', cmd: 'sum', numbers: [ 1, 2, 3] }, function ( err, result ) {
  console.log( result );
} )

As you can see, we use seneca.use to tell Seneca that we are going to use the process.js file, where we defined our function. In the next lines, we use seneca.act to call the function from process.js. We send the JSON object with role and cmd, along with the numbers argument. The result object is returned and it should contain our result. Let’s test it:

node index.js

Woohoo, it works! It returned a { result: 6 } object, and that’s what we expected!

Conclusion

Seneca is awesome; it has big potential and you can create much more complex apps with it. You can run multiple Node processes, share the same services between them, and do a lot of other cool stuff. I will write more about this topic. Stay tuned!

I hope you liked this introduction and tutorial to Seneca.js! If you want to learn more about it, you can check out SenecaJS website and their API page: http://senecajs.org/api/.

Source: https://www.codementor.io/ivan.jovanovic/tutorials/introduction-to-nodejs-microservices-senecajs-du1088h3k

 

Node.js debugging with Chrome DevTools

Node.js debugging with Chrome DevTools (in parallel with browser JavaScript)

Serg Hospodarets Blog, September 29, 2016

Recently Paul Irish described how you can debug Node.js applications with Chrome DevTools.

Since then, Chrome DevTools has evolved, and the step where you had to open a separate page with a specific URL to debug the Node.js code has been removed.

It means that today you can debug your browser JavaScript files and your Node.js ones in the same DevTools window, in parallel, which makes perfect sense.

Let’s take a look at how it works.

What you need

1) Node.js 6.3+

You can install it directly from the Node.js site or switch to it using nvm (Node Version Manager)

It’s better to use 6.6+; as Paul Irish mentioned in the comments, “in 6.4 there are a few flaky bugs”.

2) Chrome 55+

For that, you can download Chrome Canary

If you are using Atom, try this https://atom.io/packages/node-debugger

Enable a new way of Node.js debugging in Chrome

Currently, parallel debugging of browser JavaScript and Node.js code is a new feature, qualified as an experiment.

To enable it, you have to do the following:

  • Open the chrome://flags/#enable-devtools-experiments URL
  • Enable the Developer Tools experiments flag
  • Relaunch Chrome
  • Open DevTools Settings -> Experiments tab (it becomes visible after the relaunch)
  • Press "SHIFT" 6 times (enjoy it ¯\_(ツ)_/¯) to show the hidden experiments
  • Check the "Node debugging" checkbox
  • Open/close DevTools

Debug

To start debugging, just open your page in Chrome and DevTools, as usual.

Start Node.js app

Start the Node.js app in debug mode. For that, add the --inspect argument, e.g.:

node --inspect node.js

If you do this, you’ll see output from Node.js confirming that it started in debug mode, plus an option to inspect the process by opening a separate URL in Chrome:

Debug in DevTools

But we want to debug it in parallel with our browser JavaScript, so switch back to your Chrome.

If you have any console.log or similar output in your Node.js application (outlined in blue in the previous image), you may notice that it already appears in the Chrome DevTools console:

After that, you can, for example, put breakpoints in both browser and Node JavaScript files and debug them.
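
Besides clicking line numbers in the Sources panel, a breakpoint can be set directly in code with the standard debugger statement, which works the same way in browser and Node.js files:

```javascript
// DevTools pauses on the `debugger` statement when a debugger is
// attached; without one, the statement is a no-op.
function sum(a, b) {
  debugger;
  return a + b;
}

console.log(sum(2, 3)); // 5
```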

I prepared a demo:

It contains a usual static server using Node.js (the node.js file) and a page which just makes fetch requests to the given URL (the code is in the browser.js file).

You can try it to see how easily you can debug Node.js in Chrome. Just download the demo code from GitHub and run node --inspect node.js in its folder.

After that, open http://localhost:8033/ in Chrome, and you can debug both browser.js and node.js at the same time:

Other

As you see, you can put breakpoints in the Node.js code and output from Node.js apps goes to the DevTools console.

But there are many other abilities:

  • Live Edit: you can not only debug, but also change the file contents
  • Profile JavaScript
  • Take snapshots etc.

Conclusions

If you use Node.js for your project, you can now debug and make changes to all your JavaScript from one place: Chrome DevTools.

You can also use all the power of Chrome DevTools, applying it to Node.js code.

Source: https://blog.hospodarets.com/nodejs-debugging-in-chrome-devtools

Other sources for javascript debugging:

Debugging JavaScript:    https://developer.chrome.com/devtools/docs/javascript-debugging

NodeJS inbuilt debugger: https://nodejs.org/api/debugger.html

Node-inspector: https://github.com/node-inspector/node-inspector

Atom-node-debugger: https://atom.io/packages/node-debugger

HTTP Proxy – node.js

What is HTTP Proxy basically?

In computer networks, a proxy server is a server (a computer system or an application) that acts as an intermediary for requests from clients seeking resources from other servers. A client connects to the proxy server, requesting some service, such as a file, connection, web page, or other resource available from a different server and the proxy server evaluates the request as a way to simplify and control its complexity. Proxies were invented to add structure and encapsulation to distributed systems.

Colloquially, “HTTP proxy” also refers to the web address you type into your browser’s proxy settings (pointing at your company’s proxy server) so you can access the internet.

What does it do?

Proxy server advantages include:

  • Maintaining identity anonymity as a security precaution.
  • Content control
  • Accelerating caching rates.
  • Facilitating access to prohibited sites.
  • Enforcing access policies on certain websites.
  • Allowing a site to make external server requests.
  • Avoiding security controls.
  • Bypassing Internet filtering for access to prohibited content.

So what does it do in Node.js?

In Node.js, we can build an HTTP proxy with custom routing logic that conditionally forwards to different ports based upon the requested host and path name.

Install http-proxy and url:

$ npm install http-proxy url

Then define the proxy in a Node.js script (we’ll name the file proxy-server.js):

var httpProxy = require("http-proxy");
var url = require("url");

httpProxy.createServer(function(req, res, proxy) {

  var hostname = req.headers.host.split(":")[0];
  var pathname = url.parse(req.url).pathname;

  // Options for the outgoing proxy request.
  var options = { host: hostname };

  // Routing logic
  if(hostname == "127.0.0.1") {
    options.port = 8083;
  } else if(pathname == "/upload") {
    options.port = 8082;
    options.path = "/"; 
  } else {
    options.port = 8081;
  }
  // (add more conditional blocks here)

  proxy.proxyRequest(req, res, options);

}).listen(8080);

console.log("Proxy listening on port 8080");

All incoming requests on port 8080 will be received by the proxy and considered by the routing logic. The url module is used to parse the request URL and extract the pathname, which is tested along with the hostname to determine the forwarding target.

Requests with 127.0.0.1 as the host name will be forwarded to localhost:8083. Requests that do not have 127.0.0.1 as the host but have the path /upload are forwarded to localhost:8082. Any other request will be forwarded to localhost:8081.

You can add this to the bottom of proxy-server.js to create the three target servers:

var http = require("http");

http.createServer(function(req, res) {
  res.end("Request received on 8081");
}).listen(8081);

http.createServer(function(req, res) {
  res.end("Request received on 8082");
}).listen(8082);

http.createServer(function(req, res) {
  res.end("Request received on 8083");
}).listen(8083);

Start the proxy by running in the terminal:

$ node proxy-server.js
Proxy listening on port 8080

Now verify that the routing works as expected using your browser.

Visiting http://localhost:8080/ should forward to the server listening on port 8081 (the host name is not 127.0.0.1 and the path is not /upload):

Request received on 8081

The /upload path (http://localhost:8080/upload) will forward to 8082:

Request received on 8082

Entering 127.0.0.1 as the host name (http://127.0.0.1:8080/) forwards to 8083:

Request received on 8083
 
 

Why are client and server separate folders in Yeoman generator web apps?

Supported Configurations

Client

  • Scripts: JavaScript, CoffeeScript, Babel
  • Markup: HTML, Jade
  • Stylesheets: CSS, Stylus, Sass, Less
  • Angular Routers: ngRoute, ui-router

Server

  • Database: None, MongoDB
  • Authentication boilerplate: Yes, No
  • oAuth integrations: Facebook Twitter Google
  • Socket.io integration: Yes, No

When should we go for Node.js?

Generally speaking, Node.js:

  • is a command-line tool that lets one run JavaScript programs, including regular web servers
  • utilizes the great V8 JavaScript engine
  • is very good when you need to do several things at the same time
  • is event-based so all the wonderful Ajax-like stuff can be done on the server side
  • lets us share code between the browser and the backend

Node.js is especially suited for applications where you’d like to maintain a persistent connection from the browser back to the server. Using a technique known as “long polling”, you can write an application that sends updates to the user in real time. Doing long polling in traditional frameworks like Ruby on Rails or Django would create an immense load on the server, because each active client eats up one server process; the situation amounts to a tarpit attack. With something like Node.js, the server has no need to maintain separate threads for each open connection.

This means you can create a browser-based chat application in Node.js that takes almost no system resources to serve a great many clients. Any time you want to do this sort of long-polling, Node.js is a great option.

Why can’t we use traditional methods of sending data, like AJAX polling or server push?

Sending Data

Nowadays, to send a request to a server, it is very common to use AJAX, sending an XMLHttpRequest to the server without any page reloads.

Polling

One way to get fresh data from the server is to request it periodically. This can be achieved with a simple setInterval function making an AJAX call to the server every x seconds. This is called polling, because we are constantly asking the server: ‘do you have anything new?’.
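
A minimal sketch of this idea in plain JavaScript. Here getData is a hypothetical stand-in for the AJAX call to the server:

```javascript
// Naive polling: ask for new data every `intervalMs` milliseconds.
function poll(getData, onData, intervalMs) {
  return setInterval(function() {
    getData(function(err, data) {
      if (!err && data) onData(data);
    });
  }, intervalMs);
}

// Usage sketch with a fake "server" response; stops itself after ~0.5 s.
var timer = poll(
  function(cb) { cb(null, { time: Date.now() }); },
  function(data) { console.log('got:', data); },
  100
);
setTimeout(function() { clearInterval(timer); }, 500);
```

The obvious downside is that most requests come back empty, which is exactly what long polling avoids.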

Push Technology – Event Based Programming

But what if the server wants to send the client new data, and the client has no idea when that data will be available? This is sometimes referred to as push technology: the server sends data to the client only when there is a reason to, i.e. only when data is actually available.

Long Polling

Long polling is where the client requests new data from the server, but the server does not respond until there is data. In the meantime, the client has an open connection to the server and is able to accept new data once the server has it ready to send.

subscribe: function(callback) {
  var longPoll = function() {
    $.ajax({
      method: 'GET',
      url: '/messages',
      success: function(data) {
        callback(data)
      },
      complete: function() {
        // Immediately re-issue the request once the previous one completes
        longPoll()
      },
      timeout: 30000
    })
  }
  longPoll()
}

When it comes to Node.js:

On the server side, the request is then subscribed to the message bus. In Node.js, the message bus is built in and can be required as follows. You’ll probably want to set max listeners to more than the default of 10.

var EventEmitter = require('events').EventEmitter  
var messageBus = new EventEmitter()  
messageBus.setMaxListeners(100)  

Subscribing the request:

router.get('/messages', function(req, res){  
    var addMessageListener = function(res){
        messageBus.once('message', function(data){
            res.json(data)
        })
    }
    addMessageListener(res)
})

So you’re wondering: when will this message bus ever fire data back to the response? Since we have a subscriber to the event, we also need a publisher to cause an event.

To emit a message to the message bus, the client, or something else, will need to initiate a publish event.

publish: function(data) {  
    $.post('/messages', data)
}

And so the server can then emit a message to the message bus.

router.post('/messages', function(req, res){  
    messageBus.emit('message', req.body)
    res.status(200).end()
})

In this setup, every client continually keeps an open connection to the server, yet the server is not easily overloaded by the multitude of requests, because each request/response is parked on the message bus and answered only once data is ready to be emitted. This method has been widely popular due to its relatively easy setup and wide support across all browsers.

How does it work?

A quick calculation: assuming that each thread potentially has an accompanying 2 MB of memory with it, running on a system with 8 GB of RAM puts us at a theoretical maximum of 4000 concurrent connections, plus the cost of context-switching between threads. That’s the scenario you typically deal with in traditional web-serving techniques. By avoiding all that, Node.js achieves scalability levels of over 1M concurrent connections (as a proof-of-concept).
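
The back-of-the-envelope arithmetic, using the figures assumed above (2 MB per thread, 8 GB of RAM):

```javascript
// 8 GB of RAM divided by ~2 MB of stack per thread
var ramMB = 8 * 1024;
var perThreadMB = 2;
var maxThreads = ramMB / perThreadMB;
console.log(maxThreads); // 4096 -- roughly the "theoretical maximum of 4000"
```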

There is, of course, the question of sharing a single thread between all clients requests, and it is a potential pitfall of writing Node.js applications. Firstly, heavy computation could choke up Node’s single thread and cause problems for all clients (more on this later) as incoming requests would be blocked until said computation was completed. Secondly, developers need to be really careful not to allow an exception bubbling up to the core (topmost) Node.js event loop, which will cause the Node.js instance to terminate (effectively crashing the program).

The technique used to avoid exceptions bubbling up to the surface is passing errors back to the caller as callback parameters (instead of throwing them, as in other environments). Even if some unhandled exception manages to bubble up, there are multiple paradigms and tools available to monitor the Node process and perform the necessary recovery of a crashed instance (although you won’t be able to recover users’ sessions), the most common being the Forever module, or a different approach using external system tools such as upstart and monit.

NPM: The Node Package Manager

When discussing Node.js, one thing that definitely should not be omitted is built-in support for package management using the NPM tool that comes by default with every Node.js installation. The idea of NPM modules is quite similar to that of Ruby Gems: a set of publicly available, reusable components, available through easy installation via an online repository, with version and dependency management.

A full list of packaged modules can be found on the NPM website (https://npmjs.org/), or accessed using the npm CLI tool that is automatically installed with Node.js.