
Dependency injection with Node.js

In the last project I worked on, I had the chance to apply some dependency injection patterns to a Node.js application.
Before I get into the details of the implementation it is important to understand how using dependency injection could benefit your project.

Wikipedia’s definition

Dependency injection is a software design pattern that allows removing hard-coded dependencies and making it possible to change them, whether at run-time or compile-time.[1]

This can be used, for example, as a simple way to load plugins dynamically or to choose stubs or mock objects in test environments vs. real objects in production environments. This software design pattern injects the depended-on element (object or value etc) to the destination automatically by knowing the requirement of the destination. Another pattern, called dependency lookup, is a regular process and reverse process to dependency injection.

Basically, dependency injection gives you the flexibility to separate a module's functionality from its dependencies.
This decoupling can come in handy during testing, or when you need to modify some of a module's dependencies later on.

Creating the module

Let's look at how you can implement some dependency injection patterns with Node.

I’m going to use the WebVirt project to show some examples in action.

The code below represents a single controller that manages some express routes:

 
[sourcecode language="javascript"]
var _virtController;
var virt, Step, _, logger;

var VirtController = function (di) {
};

VirtController.prototype.actions = function (req, res) {
};

VirtController.prototype.hostStats = function (req, res) {
};

VirtController.prototype.list = function (req, res) {
};

module.exports.inject = function (di) {
  if (!_virtController) {
    virt = di.virtModel;
    Step = di.Step;
    _ = di._;
    logger = di.logger;
    _virtController = new VirtController(di.config.logger);
  }

  return _virtController;
};
[/sourcecode]

The controller has three basic methods:

  • actions
  • hostStats
  • list

However, only the inject method is exported.
It is the module's single entry point, where you can perform validation, initialization procedures, or anything else that needs to happen before the module is instantiated.

In the example above we only check if an instance was already created so we don’t create two equal objects, applying the Singleton pattern.

Injecting dependencies

To use the module all we need to do is to “inject” the dependencies and receive back the initialized instance:

  
[sourcecode language="javascript"]
// Load dependencies
var di = {};
var _ = di._ = require("underscore");
di.Step = require('../../external/step/lib/step.js');
di.exec = require('child_process').exec;
di.config = config = require('../../config/config.js');
di.logger = logger = require('../../utils/logger.js');

exports.virtModel = di.virtModel = require("./models/virt-model.js").inject(di);

exports.virtController = virtController = require("./controllers/virt-controller").inject(di);
[/sourcecode]

One of the major benefits we gained from applying dependency injection in our project was the flexibility to quickly identify what a module needed to operate on, and to patch those dependencies quickly when changes were needed.
For example:
The WebVirt project is composed of two different pieces, the WebVirt-Manager and the WebVirt-Node.
They are separate modules that share the same code base but are designed to run on different hosts. Each one of them has specific dependencies.
The WebVirt-Manager requires Redis to store the users of the system as well as other bits of data.
However, the WebVirt-Node does not need Redis.
That posed a huge problem: both apps shared the same code base, we were using a Logger module that saved its logs to a Redis db, and only the WebVirt-Manager host had a Redis db running.

To fix this problem we passed a “Custom Logger” to the WebVirt-Node.
Instead of requiring the Logger that was talking with the Redis db, we passed a Logger that only logged stuff to the console.

  
[sourcecode language="javascript"]
// Load dependencies
var di = {};
var _ = di._ = require("underscore");
di.Step = require('../../external/step/lib/step.js');
di.exec = require('child_process').exec;
di.config = config = require('../../config/config.js');

// Custom logger that only writes to the console (no Redis needed)
var logger = {
  error: function (err, metadata) {
    console.log("err: ", err);
    console.log("metadata: ", metadata);
  }
};
di.logger = logger;

exports.virtModel = di.virtModel = require("./models/virt-model.js").inject(di);

exports.virtController = virtController = require("./controllers/virt-controller").inject(di);
[/sourcecode]

And by changing just a few lines of code we were able to modify the module's dependencies without altering its source or functionality.
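This decoupling also pays off in tests: since the controller only sees what is handed to inject, a test can pass in stubs instead of the real modules. A rough sketch of what that could look like (the stub shapes below are made up for illustration):

[sourcecode language="javascript"]
// Hypothetical test setup: hand the controller stubs instead of the real dependencies.
var di = {};
di._ = require("underscore");
di.Step = require('../../external/step/lib/step.js');
di.virtModel = { list: function (cb) { cb(null, []); } };     // fake model
di.logger = { error: function () {}, info: function () {} };  // silent logger
di.config = { logger: di.logger };

var virtController = require("./controllers/virt-controller").inject(di);
// virtController can now be exercised in isolation, without a Redis db running.
[/sourcecode]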

Node.js real time logging with Winston, Redis and Socket.io, p2

Following up on my last blog post, Node.js real time logging with Winston, Redis and Socket.io, p1, I want to get into the integration of winston with Socket.io to stream logs in real time to a browser.

So just a quick recap: the idea here is to have a mechanism that logs messages categorized by different levels on the server and displays them at the same time in the browser, keeping the user informed at all times of what is happening in the back end, or helping a developer spot bugs without the need to keep an eye on the terminal console or search for log files buried in the server.

Socket.io

So first we need to initialize the socket.io lib.
This part could be done in several different ways; I haven't found a baseline to follow for initializing and sharing a socket.io handler on express, so if anybody knows one please hit me up in the comments.
Anyway, the approach I decided to take was:

  1. Initialize the logger
  2. Register an event listener on the logger instance.
  3. Start the express http server
  4. Start the socket.io server
  5. Fire event on the logger instance

[sourcecode language="javascript"]
// Create logger
var di = {};
di.config = require('./config/config.js');
var logger = require('./utils/logger.js').inject(di);

// Start listening for socket event on the logger
logger.on("socket", function () {
  this.socketIO = loggerSocket;
});

// Create && Start http server
var server = http.createServer(app);
server.listen(app.get('port'), function () {
  console.log("Express server listening on port " + app.get('port'));
});

// Create socket.io connection
var io = require('socket.io');
console.log("creating socket connection");
var loggerSocket = io.listen(server, {log: false}).of('/logger');

loggerSocket.on('connection', function (socket) {
  socket.join(socket.handshake.sessionID);
  // Emit event to logger
  logger.emit("socket");
});
[/sourcecode]

As you can see in the code snippet above, all the dependencies for the logger are saved in an object and injected into the module; in this case it only depends on config.js.
And since the logger is a singleton, all other modules that require the logger will get an already initialized instance.

After we get a handle on the logger, we start listening for the 'socket' event (the name could be anything, since we fire the event ourselves later in the code). The reason for this event is to grab a hold of the socket connection and save it inside the logger, so it can start streaming logs as soon as they are generated.
We could simply set the socketIO reference on the logger inside the socket's connection event; however, decoupling the socket.io handler initialization from the logger gives us the flexibility to move things around to different places.

Lastly, we start the http and socket.io servers and fire the socket event once socket.io finishes connecting.

Streaming logs with winston

Now that the logger has a handle of the socket.io connection it can start streaming logs to the browser in real time.

[sourcecode language="javascript"]
var CustomLogger = function (config) {

  // ...
  // ...

  winston.stream({ start: -1 }).on('log', function (log) {
    var type = log.transport[0];
    if (self.socketIO && type === "redis") {
      console.log("\n**emitting socket msg");
      self.socketIO.emit("newLog", log);
    }
  });
}
[/sourcecode]

In the logger constructor we initialize the winston stream which listens for all new logs added to different Transports.
That's why we check specifically for the redis Transport before emitting the log with socket.io, since we don't want to emit repeated logs.

Displaying logs on the client

Looking at the client-side code:

[sourcecode language="javascript"]
// Create socketIO connection
this.logger = io.connect('/logger');
// Incoming log via socket connection
this.logger.on('newLog', function (data) {
  var log = new app.Log({
    'timestamp' : data.timestamp,
    'file'      : data.file,
    'line'      : data.line,
    'message'   : data.message,
    'level'     : data.level,
    'id'        : data.id
  });
  self.socketLog = true;
  self.collections[data.level].add(log);
});
[/sourcecode]

We create a socket connection with the server and start listening for the ‘newLog’ event, which contains the log data being streamed from winston.
For our app, since we are using Backbone, we create a new Log model and add that Log to the Logger collection, which contains a bunch of Logs.
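For context, a rough sketch of the Backbone pieces the snippet above assumes (the names mirror the snippet, the rest is hypothetical): a Log model and one collection per log level.

[sourcecode language="javascript"]
// Hypothetical Backbone setup backing the client snippet above.
app.Log = Backbone.Model.extend({});

app.LogCollection = Backbone.Collection.extend({
  model: app.Log
});

// One collection per winston level, keyed by the level name,
// so self.collections[data.level].add(log) drops each log in the right bucket.
this.collections = {
  info: new app.LogCollection(),
  warn: new app.LogCollection(),
  error: new app.LogCollection()
};
[/sourcecode]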

Just to give an idea of how the Logger prototype is shaping up:
[Screenshot: the Logger prototype in the browser]

In the end this works, but it could be better.
My idea is to deeply integrate a socket.io streaming functionality with winston, providing the option to start streaming logs straight out of the box. The goal is to make logs as useful as possible, and not just something that’s there but never used.

Node.js real time logging with Winston, Redis and Socket.io, p1

After I had the chance to hack on some bugs in Firefox I noticed that they have a very strong logging system built into the project, which makes logging very easy and standardized.

A few built-in logging macros that are very common across Firefox code are:

  • NS_ASSERTION
  • NS_WARNING
  • NS_ERROR

The complete list of macros can be found here

Keeping that in mind I thought it would be beneficial to implement something similar in the WebVirt project.
It not only helps developers spot bugs and keep track of issues that have appeared in the application, but it also gives users direct access to the server logs, without the need to dig through several directories to find one text file with thousands of entries, which makes it very difficult to get any useful information out of it.

So the basic idea was to have different levels of logs, to make them more granular and provide a simple interface to allow users and developers to use the logs as a debugging tool.

Winston Logging Library

If you are working with Node.js there is a very solid library that is well maintained and used across several projects: winston.

The beauty of winston is the flexibility and power it provides.
A quick overview on winston:

Transports
You can create different “Transports”, which are the final location where your logs are stored.
So it is possible to log everything to a database, the console and to a file at the same time.

Metadata
There is support for metadata on a per-log basis, which can be very useful for adding extra information about an error you are logging.
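A minimal sketch of both ideas together, logging to two Transports at once with a metadata object attached (the file name and message below are made up):

[sourcecode language="javascript"]
var winston = require("winston");

// The default Console transport is already registered; add a File transport as well.
winston.add(winston.transports.File, { filename: "webvirt.log" });

// Both transports receive the message plus the metadata object.
winston.error("failed to reach host", { ip: "10.0.0.2", module: "network-scanner" });
[/sourcecode]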

Querying Logs
Another functionality built into the library is querying existing logs.
Independently of the Transport you use to save your logs, winston can query them and return the results in JSON format with all the metadata parsed.

Streaming Logs
This is a very handy feature if you are planning to implement a real time logging solution with web sockets for example.

After looking at all the features winston provides it was a no brainer to start leveraging its power instead of writing a new logger from scratch.

So now let's get into the details of how winston works in WebVirt.

WebVirt Custom Logger

First of all we created a CustomLogger module which wrapped winston, giving us the flexibility to play around with the CustomLogger implementation without breaking the whole system, since we keep the logger API constant throughout the app.

[sourcecode language="javascript"]
var events = require("events");

var _logger;

var CustomLogger = function (config) {
};

CustomLogger.prototype = new events.EventEmitter();

CustomLogger.prototype.info = function (msg, metadata) {
};

CustomLogger.prototype.warn = function (msg, metadata) {
};

CustomLogger.prototype.error = function (msg, metadata) {
};

CustomLogger.prototype.query = function (options, cb) {
};

module.exports.inject = function (di) {
  if (!_logger) {
    _logger = new CustomLogger(di.config.logger);
  }
  return _logger;
};
[/sourcecode]

The CustomLogger implements a Singleton pattern, which only instantiates one object for the whole app.
We allow a set of custom dependencies to be injected into the module, which gives us even more flexibility to move things around without the risk of coupling them together.
We are also extending EventEmitter so the CustomLogger can emit its own events to whoever chooses to listen. That could be one way to implement a real-time web socket logging system, but later on I'll show that there is an easier way.
Finally, we just defined all methods we want to make publicly available on the logger.
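Just to illustrate the EventEmitter route mentioned above (not the approach we ended up taking), the logger could emit a hypothetical "logged" event that any other module could subscribe to:

[sourcecode language="javascript"]
// Hypothetical sketch of the EventEmitter option: emit an event on every error.
CustomLogger.prototype.error = function (msg, metadata) {
  this.emit("logged", { level: "error", message: msg, metadata: metadata });
};

// Somewhere else in the app, e.g. a websocket handler:
logger.on("logged", function (log) {
  // push the log to connected browsers, write to another store, etc.
});
[/sourcecode]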

After creating the skeleton for our CustomLogger we started to integrate winston in it.

[sourcecode language="javascript"]
var winston = require('winston');

var CustomLogger = function (config) {
  require('../external/winston-redis/lib/winston-redis.js').Redis;

  winston.handleExceptions(new winston.transports.Redis());
  winston.exitOnError = false;

  winston.remove(winston.transports.Console);
  winston.add(winston.transports.Console, {
    handleExceptions: true,
    json: true
  });
}

CustomLogger.prototype.info = function (msg, metadata) {
  winston.info(msg, metadata);
};

CustomLogger.prototype.warn = function (msg, metadata) {
  winston.warn(msg, metadata);
};

CustomLogger.prototype.error = function (msg, metadata) {
  winston.error(msg, metadata);
};
[/sourcecode]

As you can see it's very straightforward: the CustomLogger info, warn and error methods make a direct call to winston.
To initialize winston we require the winston-redis lib, which exposes the Redis Transport for Winston.
This leads me to the next topic:

Winston Redis Transport

Since we are already using Redis to store user information as well as host details, we chose to use Redis to store the logs too.
The winston-redis module is very easy to use and works out of the box, however, it didn’t fit the exact idea I had about the logging system of WebVirt.

We wanted to display different levels of logs to the user directly in the browser; however, we would need some sort of pagination control, since the number of logs could go up to the thousands depending on the usage.
Not only that, we also wanted to be able to search all logs of a particular level, have a real-time feature built in to display the logs in the browser over a websocket, and even set triggers to send emails or other notifications based on pre-set filters.

With that being said, winston-redis saves all logs, independently of their level, to a single list on redis:

[Screenshot: all logs stored in a single Redis list]

So the ability to search and paginate the logs based on their level would be lost since they are all living in the same list.

To fix this issue and save logs on separate lists based on their levels we forked the lib and added an option to set a namespace for the redis container:

[sourcecode language="javascript"]
Redis.prototype.log = function (level, msg, meta, callback) {
  var self = this,
      container = this.container(meta),
      channel = this.channel && this.channel(meta);

  // Separate logs based on their levels
  container += ":" + level;
  this.redis.llen(container, function (err, len) {
    if (err) {
      if (callback) callback(err, false);
      return self.emit('error', err);
    }
    // Assigns an unique ID to each log
    meta.id = len + 1;
    var output = common.log({
      level: level,
      message: msg,
      meta: meta,
      timestamp: self.timestamp,
      json: self.json
    });

    // RPUSH may be better for poll-streaming.
    self.redis.lpush(container, output, function (err) {
      console.log("lpush callback");
      console.log("err: ", err);
      if (err) {
        if (callback) callback(err, false);
        return self.emit('error', err);
      }

      self.redis.ltrim(container, 0, self.length, function () {
        if (err) {
          if (callback) callback(err, false);
          return self.emit('error', err);
        }

        if (channel) {
          self.redis.publish(channel, output);
        }

        // TODO: emit 'logged' correctly,
        // keep track of pending logs.
        self.emit('logged');

        if (callback) callback(null, true);
      });
    });
  });
};
[/sourcecode]

The only difference is that instead of logging everything to a single "container" we append the level of the log to the "container", thus splitting the logs into different lists:

[Screenshot: logs split into separate Redis lists per level]

Now when we need to retrieve the logs we can specify how many and on which list we want to perform the query:

[sourcecode language="javascript"]
CustomLogger.prototype.query = function (options, cb) {
  var start = options.start || 1
    , rows = options.rows || 50
    , type = options.type || 'redis'
    , level = options.level || 'error';

  winston.query({
    'start': +start,
    'rows': +rows,
    'level': level
  }, function (err, data) {
    cb(err, data.redis);
  });
}
[/sourcecode]

Something to keep in mind is that winston.query searches all Transports you have registered.
So if you are logging to multiple transports make sure you only use one Transport when reading the data back or you’ll get repeated values.

This sums up the first part of the post.
Next I'll post about how to integrate Socket.IO with Winston and stream logs in real time to a browser.

Enabling CORS on a node.js server, Same Origin Policy issue

Recently we faced the famous “XMLHttprequest doesn’t allow Cross-Origin Resource Sharing” error.

To overcome the problem a very simple solution was needed.

Below I'll try to give a quick overview of what CORS is and how we managed to work around the issue.

Cross-Origin Resource Sharing – CORS

In a nutshell, CORS is the mechanism that allows a domain to request resources from another domain. Without it, if a page on http://websiteAAA tries to request resources from http://websiteBBB the browser won't allow it, due to Same Origin Policy restrictions.

The reason for having Same Origin Policy rules applied on the browser is to prevent unauthorized websites accessing content they don’t have permissions for.

I found a great example that emphasizes the need to have Same Origin Policies enforced by the browser: Say you log in to a service, like Google for example, then while logged in you go and visit a shady website that’s running some malware on it. Without Same Origin Policy rules, the shady website would be able to query Google with the authentication cookies saved in the browser from your session, which of course is a huge vulnerability.

Since HTTP is a stateless protocol, the Same Origin Policy rules allow the browser to establish a connection using session cookies and still keep each cookie private to the domain that made the request, encapsulating the privileges of each “service” running in the browser.

With that being said, imagine a situation where you, as a developer, need to communicate with an API sitting on a different domain. In this scenario you don’t want to hit the Same Origin Policy restrictions.

Workaround 1 – Request resources from a server

The most common way to get around this problem is to make the API request from your own server, where Same Origin Policy rules are not applied, and then provide the data back to the browser. However, this can be exploited:

Last semester I created an example of how an attacker would be able to spoof whole websites and apply a phishing attack circumventing Same Origin Policy restrictions.
The attack structure was very similar to how ARP poisoning is done.

A very brief overview of the attack:

  1. The user would land on an infected page
  2. The page would load a legitimate website by making a request from the attacker's server, where Same Origin Policies are not applied.
  3. The attacker would inject some code into the response to monitor the victim's activity
  4. After the victim's credentials were stolen, the attacker would stop the attack and redirect the user to the originally requested page.

By spoofing the victim's DNS the attack would be even harder to detect, but even without DNS spoofing this approach would still catch some careless users.

All the code for the example is available on github
The attack was built on top of a nodeJS server and socketIO
The presentation slides for the attack can also be found here

Workaround 2 – JSONP

Another way to circumvent the problem is by using JSONP (JSON with Padding). The Wikipedia article summarizes in a clear and simple way how JSONP works.

The magic of JSONP is to use a script tag to load a json file and provide a callback to run when the file finishes loading.

An example of using JSONP with jquery:

[sourcecode language="javascript"]
$.ajax({
  url: "http://website.com/file.json",
  dataType: 'jsonp',
  success: function (data) {
    // Manipulate the response here
  }
});
[/sourcecode]
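For JSONP to work, the server side has to cooperate by wrapping its JSON in the callback the client asks for. A hypothetical express route (the endpoint and payload below are made up) might look like:

[sourcecode language="javascript"]
// Hypothetical JSONP endpoint: wrap the JSON payload in the callback
// name the client supplied (e.g. ?callback=jQuery12345).
app.get("/file.json", function (req, res) {
  var payload = JSON.stringify({ hello: "world" });
  var callback = req.query.callback;
  res.header("Content-Type", "application/javascript");
  res.send(callback + "(" + payload + ");");
});
[/sourcecode]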

Even though making requests from your server or using JSONP can get around the Same Origin Policy restrictions, neither is the best solution, which is why CORS started being implemented by the browser vendors.

With CORS a server can set the HTTP headers of the response with the information indicating if the resources can or can’t be loaded from a different origin.

If you are curious and want to snoop around the HTTP response headers of a page, one way to do that is to use the developer tools that come with WebKit.
Below is a screenshot of all the resources loaded by the Stack Overflow home page.
[Screenshot: resources loaded by the Stack Overflow home page]

As you can see in the screenshot, the script loaded from careers.stackoverflow.com/gethired/js had the following HTTP header options appended to its response:

  • Access-Control-Allow-Headers
  • Access-Control-Allow-Methods
  • Access-Control-Allow-Origin

That means that if you want to make an ajax call to careers.stackoverflow.com/gethired/js from your own page, the browser will not apply Same Origin Policy restrictions, since the careers.stackoverflow server has indicated that the script is allowed to be loaded from different domains.
*An important note is that only http://careers.stackoverflow.com/gethired/js has the Same Origin rules turned off; the careers.stackoverflow.com domain still has them enabled on other pages.

This means you can enable the header options on a response level, making sure a few API calls are open to the public without putting your whole server in danger of being exploited.

This leads us to our problem.

The Problem

In the setup we currently have, one computer plays the role of the API server, and we were trying to query that API asynchronously from the browser with a page served from a different domain.

The results, as expected, were that the call was blocked by the browser.

Solution

Instead of hacking around and trying to make the requests from a different server or using JSONP techniques we simply added the proper header options to the responses of the API server.

We are building our API on a nodeJS server, and adding extra header options to the response could not have been easier:

First we added the response headers to one of the API calls and it worked perfectly; however, we wanted to add the option to all our API calls. The solution: use a middleware.
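Adding the headers to a single route looked roughly like this (sketch only, the actual handler body is omitted):

[sourcecode language="javascript"]
// Rough sketch: enabling CORS for just one API call by setting the
// headers directly inside the route handler.
app.get('/list/vms', function (req, res) {
  res.header("Access-Control-Allow-Origin", "*");
  res.header("Access-Control-Allow-Headers", "X-Requested-With");
  // ... build and return the VM list as before
});
[/sourcecode]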

We created a middleware which sets the response header options and passes execution to the next registered function; the code looks like this:

[sourcecode language="javascript"]
//CORS middleware
var allowCrossDomain = function (req, res, next) {
  res.header("Access-Control-Allow-Origin", "*");
  res.header("Access-Control-Allow-Headers", "X-Requested-With");
  next();
}

app.configure(function () {
  app.set('port', config.interfaceServerPort);
  app.set('views', __dirname + '/views');
  app.set('view engine', 'jade');
  app.use(express.favicon());
  app.use(express.logger('dev'));
  app.use(express.bodyParser());
  app.use(express.methodOverride());
  app.use(allowCrossDomain);
  app.use(app.router);
  app.use(express.static(path.join(__dirname, 'public')));
});

app.configure('development', function () {
  app.use(express.errorHandler());
});

// API routes
app.get('/list/vms', routes.listGroup);
app.get('/list/vms/:ip', routes.listSingle);
app.get('/list/daemons', routes.listDaemons);
[/sourcecode]

That's it for CORS. Later we'll cover another cool header option, X-Frame-Options.

If you are interested in finding out more about Same Origin Policy or CORS, check out these links:
http://en.wikipedia.org/wiki/JSONP
http://geekswithblogs.net/codesailor/archive/2012/11/02/151160.aspx
https://blog.mozilla.org/services/2013/02/04/implementing-cross-origin-resource-sharing-cors-for-cornice/
https://developers.google.com/storage/docs/cross-origin
http://www.tsheffler.com/blog/?p=428
http://techblog.hybris.com/2012/05/22/cors-cross-origin-resource-sharing/
http://security.stackexchange.com/questions/8264/why-is-the-same-origin-policy-so-important
http://www.w3.org/TR/cors/
https://developer.mozilla.org/en-US/docs/HTTP/AccesscontrolCORS
https://developer.mozilla.org/en-US/docs/Server-SideAccessControl
http://www.bennadel.com/blog/2327-Cross-Origin-Resource-Sharing-CORS-AJAX-Requests-Between-jQuery-And-Node-js.htm

Playing around with Redis, pros and cons

For the use case we are dealing with we had a few options when selecting a database solution to power the application.
The options under consideration were MySQL, SQLite, MongoDB and Redis.

Looking at SQL databases:
Due to the fact that we would only need a database to save a few settings and host information, SQLite would have been the better option since it is a serverless database, unlike MySQL with its client-server model. In a sense SQLite can be thought of as part of the whole application, which makes deployments much easier.

Looking at the NoSQL databases:
One of the main differences between Mongo and Redis is that Mongo is a document oriented database while Redis is a simple key-value database.
That means that with Mongo you can have complex data structures, while Redis offers only a few different datatypes.
Another big difference is that each document in Mongo has a uuid attached to it, which makes it much easier to query and search information in the collections. Redis, on the other hand, only provides key-based search queries, which can be a big problem if you are dealing with inherently relational data.

With that being said we decided to go ahead with Redis.

Looking closely into our problem, we found that we would mostly be using the database in a cache-like scenario. We would be caching information about the hosts in an automated process and the users wouldn’t have an option to manually change the hosts information.

The application has its own internal mechanisms to query information about the hosts in the network. Basically, the idea is to save all the hosts of a specific network and identify which ones belong to our cluster.

Another important point is that the interval of the scans the application performs on the network can be defined by the user. If the probe interval gets really small, it necessarily means accessing the host information more often.

Redis itself keeps the data loaded in RAM, similar to memcache, which makes reads much faster. It also has the benefit of persistent storage, writing the data to disk at regular intervals.

Kieran wrote a blog post listing the details of the architecture of the app.

With that being said the schema we came up with is the following:

We use hashes to store the information about the hosts.
To keep all the hashes grouped together we prefix their keys, for example:
hosts:10.0.0.1
hosts:10.0.0.2
etc..

For the keys above we have the following attributes associated:

  • ip: the address of the host
  • status: indicates whether the host is active
  • type: differentiates between regular hosts and hypervisors
  • lastOn: the last time the computer was seen active on the network

[Screenshot: host hashes stored in Redis]
One of the benefits of this approach is that once a host has been added we don't need to worry about duplicate entries: writing a hash with the same key updates its values instead of creating a copy, so we can group the create and update functionality together.
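A quick illustration of that update-in-place behaviour with the node_redis client (hypothetical values):

[sourcecode language="javascript"]
// Writing to the same host key twice just updates the hash,
// it never creates a duplicate entry.
var redis = require("redis");
var client = redis.createClient();

client.hset("hosts:10.0.0.1", "status", "on", function () {
  client.hset("hosts:10.0.0.1", "status", "off", function () {
    client.hgetall("hosts:10.0.0.1", function (err, host) {
      console.log(host); // { status: 'off' } -- still a single hash
      client.quit();
    });
  });
});
[/sourcecode]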

An example of the process of finding new hosts on the network and saving them to the db:

[sourcecode language="javascript"]
// `exec` comes from child_process and `client` is the node_redis client
NetworkScanner.prototype.searchHosts = function (cb) {
  exec("sudo nmap -sP --version-light --open --privileged 10.0.0.0/24", cb);
}

// Save active hosts
NetworkScanner.prototype.saveHosts = function (hosts, cb) {
  var hosts = hosts.match(this.networkRegex);
  var numberHosts = hosts.length;
  while (host = hosts.pop()) {
    var key = "hosts:" + host;
    client.multi()
      .hset(key, "ip", host)
      .hset(key, "status", "on")
      .hset(key, "type", "default")
      .hset(key, "lastOn", "timestamp")
      .exec(function (err, replies) {
        !--numberHosts && cb();
      });
  }
};
[/sourcecode]

Then to search existing hosts for active hypervisors:

[sourcecode language="javascript"]
// Scan port of active hosts
NetworkScanner.prototype.searchComputeNodes = function (cb) {
  var hosts = new Array();
  client.keys("hosts:*", function (err, keys) {
    var keysLength = keys.length - 1; // 0 index
    keys.forEach(function (val, index) {
      hosts.push(val.split(":")[1]);
      if (index === keysLength) {
        exec("sudo nmap --version-light --open --privileged -p 80 " + hosts.join(" ") + "", cb);
      }
    });
  });
};

// Save compute nodes
NetworkScanner.prototype.saveComputeNodes = function (computeNodes, cb) {
  computeNodes = computeNodes.match(this.networkRegex);
  var computeNodesLength = computeNodes.length - 1; // 0 index
  computeNodes.forEach(function (val, index) {
    client.hset("hosts:" + val, "type", "compute");
  });
};
[/sourcecode]

For now we don't have any benchmarks to show the performance difference between Redis and the other database solutions, so depending on how well the Redis implementation goes, we might try some comparative benchmarking in future posts.