Thursday, December 15, 2022

Design patterns in Node.js: A practical guide

 

Design patterns are part of the day-to-day of any software developer, whether they realize it or not.

In this article, we will look at how to identify these patterns out in the wild and how you can start using them in your own projects.

What are design patterns?

Design patterns, simply put, are ways to structure your solution’s code so that you gain some kind of benefit, such as faster development speed, code reusability, and so on.

All of these patterns lend themselves quite easily to the OOP paradigm, although given JavaScript’s flexibility, you can implement these concepts in non-OOP projects as well.

When it comes to design patterns, there are way too many to cover in a single article. In fact, entire books have been written exclusively about this topic, and new patterns are created every year, leaving those lists perpetually incomplete.

A very common classification for these patterns is the one used in the GoF book (the Gang of Four book), but since I’m only going to review a handful of them, I will ignore the classification and simply present a list of patterns you can spot and start using in your code right now.

Immediately Invoked Function Expressions (IIFE)

The first pattern I’m going to show you is one that allows you to define and call a function at the same time. Due to the way JavaScript scoping works, using IIFEs can be great for simulating things like private properties in classes. In fact, this particular pattern is sometimes used as part of the requirements of other, more complex ones. We’ll see how in a bit.

What does an IIFE look like?

Before we delve into the use cases and the mechanics behind it, let me quickly show you what it looks like:

(function() {
   var x = 20;
   var y = 20;
   var answer = x + y;
   console.log(answer);
})();

By pasting the above code into a Node.js REPL or even your browser’s console, you’d immediately get the result because, as the name suggests, you’re executing the function as soon as you define it.

The template for an IIFE consists of an anonymous function declaration wrapped in a set of parentheses (which turns the definition into a function expression), followed by a set of calling parentheses at the end. Like so:

(function(/*received parameters*/) {
//your code here
})(/*parameters*/)

Use cases

Although it might sound crazy, there are actually a few use cases where an IIFE brings real benefits, for example:

Simulating static variables

Remember static variables from other languages such as C or C#? If you’re not familiar with them, a static variable gets initialized the first time you use it and then keeps the value you last set it to. The benefit is that if you define a static variable inside a function, that variable is shared across every call to the function, no matter how many times you call it, which greatly simplifies cases like this:

function autoIncrement() {
    static let number = 0
    number++
    return number
}

The above function would return a new number every time we call it (assuming, of course, the static keyword were available to us in JS, which it isn’t). We could achieve this with generators in JS, that’s true, but pretend we don’t have access to them; you could simulate a static variable like this:

let autoIncrement = (function() {
    let number = 0

    return function () {
     number++
     return number
    }
})()

What you’re seeing there is the magic of closures wrapped up inside an IIFE. Pure magic. You’re basically returning a new function that gets assigned to the autoIncrement variable (thanks to the immediate execution of the IIFE). And thanks to JS’s scoping mechanics, that function will always have access to the number variable (as if it were a global variable), even though nothing outside the IIFE can touch it.
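
For comparison, and only because we mentioned them, the generator-based alternative would look roughly like this (a minimal sketch):

function* autoIncrementGenerator() {
    let number = 0
    while(true) {
     number++
     yield number
    }
}

const gen = autoIncrementGenerator()
console.log(gen.next().value) // 1
console.log(gen.next().value) // 2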

Simulating private variables

As you may (or may not) already know, ES6 classes treat every member as public, meaning there are no private properties or methods out of the box. Thanks to IIFEs, however, you can simulate them if you want to.

const autoIncrementer = (function() {
  let value = 0;

  return {
    incr() {
        value++
    },

    get value() {
        return value
    }
  };
})();
> autoIncrementer.incr()
undefined
> autoIncrementer.incr()
undefined
> autoIncrementer.value
2
> autoIncrementer.value = 3
3
> autoIncrementer.value
2

The above code shows one way to do it. Mind you, you’re not defining a class you can instantiate afterward; you’re defining a single structure, a set of properties and methods that can make use of variables which are shared with the object you’re creating but are not accessible from the outside (as the failed assignment shows).

Factory method pattern

This one, in particular, is one of my favorite patterns, since it acts as a tool you can implement to clean your code up a bit.

In essence, the factory method allows you to centralize the logic of creating objects (meaning, which object to create and why) in a single place. This allows you to forget about that part and focus on simply requesting the object you need and then using it.

This might seem like a small benefit, but bear with me for a second, it’ll make sense, trust me.

What does the factory method pattern look like?

This particular pattern would be easier to understand if you first look at its usage, and then at its implementation.

Here is an example:

( _ => {

    let factory = new MyEmployeeFactory()

    let types = ["fulltime", "parttime", "contractor"]
    let employees = []
    for(let i = 0; i < 100; i++) {
     // pick one of the three employee types at random
     employees.push(factory.createEmployee({type: types[Math.floor(Math.random() * types.length)]}))
    }

    //....
    employees.forEach( e => {
     console.log(e.speak())
    })

})()

The key takeaway from the above code is the fact that you’re adding objects to the same array, all of which share the same interface (in the sense they have the same set of methods) but you don’t really need to care about which object to create and when to do it.

You can now look at the actual implementation. There is a bit more code to take in, but it’s quite straightforward:

class Employee {

    speak() {
     return "Hi, I'm a " + this.type + " employee"
    }

}

class FullTimeEmployee extends Employee{
    constructor(data) {
     super()
     this.type = "full time"
     //....
    }
}


class PartTimeEmployee extends Employee{
    constructor(data) {
     super()
     this.type = "part time"
     //....
    }
}


class ContractorEmployee extends Employee{
    constructor(data) {
     super()
     this.type = "contractor"
     //....
    }
}

class MyEmployeeFactory {

    createEmployee(data) {
     if(data.type == 'fulltime') return new FullTimeEmployee(data)
     if(data.type == 'parttime') return new PartTimeEmployee(data)
     if(data.type == 'contractor') return new ContractorEmployee(data)
    }
}

Use case

The previous code already shows a generic use case, but if we wanted to be more specific, one particular use case I like to use this pattern for is handling error object creation.

Imagine having an Express application with about 10 endpoints, where every endpoint needs to return between two and three errors based on the user’s input. We’re talking about 30 statements like the following:

if(err) {
  res.json({error: true, message: "Error message here"})
}

Now, that wouldn’t be a problem until the next time you suddenly had to add a new attribute to the error object. Then you’d have to go over your entire project, modifying all 30 places. That could be solved by moving the definition of the error object into a class, which would be great unless, of course, you had more than one error object and once again had to decide which object to instantiate based on logic only you know. See where I’m trying to get to?


If you were to centralize the logic for creating the error object then all you’d have to do throughout your code would be something like:

if(err) {
  res.json(ErrorFactory.getError(err))
}

That is it, you’re done, and you never have to change that line again.
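
To make this concrete, here is a minimal sketch of what such a factory could look like. The ErrorFactory name matches the snippet above, but the error classes and the err.code values are assumptions made purely for illustration:

class MissingFieldError {
    constructor(err) {
     this.error = true
     this.message = "Missing field: " + err.field
    }
}

class InvalidFormatError {
    constructor(err) {
     this.error = true
     this.message = "Invalid format for field: " + err.field
    }
}

class ErrorFactory {
    static getError(err) {
     // decide which error object to build based on the error code
     if(err.code == 'MISSING_FIELD') return new MissingFieldError(err)
     if(err.code == 'INVALID_FORMAT') return new InvalidFormatError(err)
     // fall back to a generic error payload
     return { error: true, message: err.message }
    }
}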

Singleton pattern

This one is another oldie but a goodie. It’s quite a simple pattern, mind you, but it helps you keep track of how many instances of a class you’re creating; in fact, it keeps that number at just one, all of the time. In short, the singleton pattern allows you to instantiate an object once and then use that same instance every time you need it, instead of creating a new one, and without having to keep track of a reference to it yourself, either globally or by passing it as a dependency everywhere.

What does the singleton pattern look like?

Normally, other languages implement this pattern using a single static property where they store the instance once it exists. The problem here is that, as I mentioned before, we don’t have access to static variables in JS. So we could implement it in two ways: one would be by using IIFEs instead of classes.

The other would be by using ES6 modules and having our singleton class use a module-scoped variable in which to store our instance. This way, the class itself gets exported out of the module, but the variable holding the instance remains local to the module.

I know, but trust me, it sounds a lot more complicated than it actually is:

let instance = null

class SingletonClass {

    constructor() {
     this.value = Math.random() // Math.random takes no arguments
    }

    printValue() {
     console.log(this.value)
    }

    static getInstance() {
     if(!instance) {
         instance = new SingletonClass()
     }

     return instance
    }
}

module.exports = SingletonClass

And you could use it like this:

const Singleton = require("./singleton")

const obj = Singleton.getInstance()
const obj2 = Singleton.getInstance()

obj.printValue()
obj2.printValue()

console.log("Equals:: ", obj === obj2)

The output of course being:

0.5035326348000628
0.5035326348000628
Equals::  true

Confirming that indeed, we’re only instantiating the object once, and returning the existing instance.
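
For completeness, the IIFE-based alternative mentioned earlier could look roughly like this; a minimal sketch that trades the class for a closure guarding the instance:

const Singleton = (function() {
    let instance = null

    return {
     getInstance() {
         if(!instance) {
             // the instance is only ever created once
             instance = { value: Math.random(), printValue() { console.log(this.value) } }
         }
         return instance
     }
    }
})()

// Singleton.getInstance() === Singleton.getInstance() // true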

Use cases

When trying to decide if you need a singleton-like implementation or not, you need to consider something: how many instances of your classes will you really need? If the answer is 2 or more, then this is not your pattern.

But when dealing with things like database connections, there might be times when you want to consider it.

Think about it: once you’ve connected to your database, it might be a good idea to keep that connection alive and accessible throughout your code. Mind you, this can be solved in a lot of different ways, but this pattern is indeed one of them.

Using the above example, we can extrapolate it into something like this:

const driver = require("...")

let instance = null


class DBClass {

    constructor(props) {
     this.properties = props
     this._conn = null
    }

    connect() {
     this._conn = driver.connect(this.properties)
    }

    get conn() {
     return this._conn
    }

    static getInstance(props) {
     if(!instance) {
         instance = new DBClass(props)
     }

     return instance
    }
}

module.exports = DBClass

And now you’re sure that, no matter where you are in your code, if you use the getInstance method, you’ll get back the same instance and, with it, the only active connection (if any).
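
Here is a quick usage sketch, assuming the module above lives in a file called dbclass.js and that the connection options shown are just placeholders:

const DBClass = require("./dbclass")

// the first call creates the instance with the given options
const db = DBClass.getInstance({ host: "localhost", port: 5432 })
db.connect()

// anywhere else in the codebase, the exact same instance (and connection) comes back
const sameDb = DBClass.getInstance()
console.log(db === sameDb) // true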

Observer pattern

This one is a very interesting pattern, in the sense that it allows you to respond to certain input by being reactive to it, instead of proactively checking if the input is provided. In other words, with this pattern, you can specify what kind of input you’re waiting for and passively wait until that input is provided in order to execute your code. It’s a set and forget kind of deal, if you will.

Here, the observers are your objects; they know the type of input they want to receive and the action to respond with, and they’re meant to “observe” another object and wait for it to communicate with them.

The observable, on the other hand, will let the observers know when new input is available, so they can react to it if applicable. If this sounds familiar, it’s because it is: anything that deals with events in Node is implementing this pattern.

What does the observer pattern look like?

Have you ever written your own HTTP server? Something like this:

const http = require('http');


const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Your own server here');
});

server.on('error', err => {
    console.log("Error:: ", err)
})

server.listen(3000, '127.0.0.1', () => {
  console.log('Server up and running');
});

There, hidden in the above code, you’re looking at the observer pattern in the wild; an implementation of it, at least. Your server object acts as the observable, whilst your callback function is the actual observer. The event-like interface here, with the on method and the event name, might obscure the view a bit, but consider the following implementation:

class Observable {

    constructor() {
     this.observers = {}
    }

    on(input, observer) {
     if(!this.observers[input]) this.observers[input] = []
     this.observers[input].push(observer)
    }

    triggerInput(input, params) {
     this.observers[input].forEach( o => {
         o.apply(null, params)    
     })
    }
}

class Server extends Observable {

    constructor() {
     super()
    }


    triggerError() {
     let errorObj = {
         errorCode: 500,
         message: 'Port already in use'
     }
     this.triggerInput('error', [errorObj])
    }
}

You can now instantiate the server and, again, set the same observer in exactly the same way:

const server = new Server()

server.on('error', err => {
    console.log("Error:: ", err)
})

And if you were to call the triggerError method (which is there to show you how you would let your observers know that there is new input for them), you’d get the exact same output:

Error:: { errorCode: 500, message: 'Port already in use' }

If you’re considering using this pattern in Node.js, please look at the EventEmitter class first, since it’s Node.js’s own implementation of this pattern and might save you some time.
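
As a quick comparison, the same error observer from the example above, wired on top of the built-in EventEmitter, would look roughly like this (a minimal sketch):

const EventEmitter = require('events')

class Server extends EventEmitter {
    triggerError() {
     // notify every observer registered for the 'error' event
     this.emit('error', { errorCode: 500, message: 'Port already in use' })
    }
}

const server = new Server()
server.on('error', err => {
    console.log("Error:: ", err)
})
server.triggerError()
// Error::  { errorCode: 500, message: 'Port already in use' }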

Use cases

This pattern is, as you might have already guessed, great for dealing with asynchronous calls, since getting the response from an external request can be considered new input. And what do we have in Node.js, if not a constant influx of asynchronous code into our projects? So next time you have to deal with an async scenario, consider looking into this pattern.

Another widely spread use case for this pattern, as you’ve seen, is that of triggering particular events. This pattern can be found on any module that is prone to having events triggered asynchronously (such as errors or status updates). Some examples are the HTTP module, any database driver, and even socket.io, which allows you to set observers on particular events triggered from outside your own code.

Chain of responsibility

The chain of responsibility pattern is one that many of us in the world of Node.js have used without even realizing it.

It consists of structuring your code in a way that decouples the sender of a request from the object that can fulfill it. In other words, with object A sending request R, you might have three different receiving objects R1, R2, and R3. How can A know which one it should send R to? Should A care about that?

The answer to the last question is: no, it shouldn’t. So instead, if A shouldn’t care about who’s going to take care of the request, why don’t we let R1, R2 and R3 decide by themselves?

Here is where the chain of responsibility comes into play: we create a chain of receiving objects, each of which will try to fulfill the request and, if it can’t, will just pass it along. Does it sound familiar yet?

What does the chain of responsibility look like?

Here is a very basic implementation of this pattern. As you can see at the bottom, we have four possible values (or requests) that we need to process, but we don’t care who gets to process them; we just need at least one function to handle each of them, hence we send them down the chain and let each link decide whether it should handle the request or pass it along.

function processRequest(r, chain) {

    let lastResult = null
    let i = 0
    do {
     // a link returns null when it fulfilled the request, or the request itself when it didn't
     lastResult = chain[i](r)
     i++
    } while(lastResult != null && i < chain.length)
    if(lastResult != null) {
     console.log("Error: request could not be fulfilled")
    }
}

let chain = [
    function (r) {
     if(typeof r == 'number') {
         console.log("It's a number: ", r)
         return null
     }
     return r
    },
    function (r) {
     if(typeof r == 'string') {
         console.log("It's a string: ", r)
         return null
     }
     return r
    },
    function (r) {
     if(Array.isArray(r)) {
         console.log("It's an array of length: ", r.length)
         return null
     }
     return r
    }
]

processRequest(1, chain)
processRequest([1,2,3], chain)
processRequest('[1,2,3]', chain)
processRequest({}, chain)

The output being:

It's a number:  1
It's an array of length:  3
It's a string:  [1,2,3]
Error: request could not be fulfilled

Use cases

The most obvious case of this pattern in our ecosystem is middleware in ExpressJS. With it, you’re essentially setting up a chain of functions (middlewares) that evaluate the request object and decide to act on it or ignore it. You can think of it as the asynchronous version of the above example, where instead of checking whether the function returns a value, you check what values are passed to the next callback it calls.

var app = express();

app.use(function (req, res, next) {
  console.log('Time:', Date.now());
  next(); //call the next function on the chain
});

Middlewares are a particular implementation of this pattern since, instead of only one member of the chain fulfilling the request, one could argue that all of them could contribute to fulfilling it. Nevertheless, the rationale behind it is the same.
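
For instance, a single middleware can still decide to fulfill the request on its own or pass it along. Here is a minimal sketch building on the app from the snippet above; the header name is just an assumption for illustration:

app.use(function (req, res, next) {
  if(!req.headers['x-api-key']) {
    // this link of the chain fulfills the request itself and stops here
    return res.status(401).json({ error: true, message: 'Missing API key' })
  }
  // otherwise, pass the request along to the next link
  next()
});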


Final thoughts

These are but a few patterns you might run into daily without even realizing it. I’d encourage you to look into the rest of them; even if you don’t find an immediate use case, now that I’ve shown you what some of them look like in the wild, you might start spotting them yourself! Hopefully, this article has shed some light on the subject and helps you improve your coding-fu faster than ever. See you in the next one!
