Back to Basics: Pushing your JavaScript Performance to the Max

JavaScript was not invented with performance in mind. However, given the sheer number of apps being built on Node.js, it is inevitable that some of them suffer from performance problems, even with database indexing, Redis caching, and so on.

A few fixes have been proposed to reduce unnecessary operations, such as all the rules that ESLint provides, though we can do better. For example, you probably know that async operations take significantly longer than sync operations. We’re not particularly interested in the details of what an async operation does, but it might be fruitful to know how much of a performance boost omitting such an operation would give us.

In other words, if I have an operation that must be wrapped in a Promise, what is the minimum amount of time it will require to resolve? If I could write a piece of code that does the same thing as the async operation but requires hundreds of lines more, which version would be more performant? These types of questions are not always easy to answer, even when we take algorithmic complexity into account.

Furthermore, there are a lot of uncertainties when writing JavaScript. In C++ you manage memory allocation explicitly, whereas in JavaScript it is unclear how memory is managed (which is why I think TensorFlow.js is a bad idea). Even professional developers who have been programming for years might not know which array method achieves the fastest result: map, forEach and the for-loop all do similar things, but which construct should one use if we are trying to maximize performance?

Most npm packages don’t deal with such questions, which is why I decided to create my own little package with which we can assess the performance of any given function.

It basically takes a function (or an array of functions) as a parameter, runs each a given number of times, and returns an object with performance information about our function:

// sync function vs async function
const speeder = require("speeder");

function syncReturn() {
  return "returned!";
}

async function asyncReturn() {
  return "returned!";
}

async function compareFunctions() {
  const results = await speeder([syncReturn, asyncReturn], {
    names: ["Sync Simple Return", "Async Simple Return"],
  });
  console.log(results);
}

compareFunctions();
This way we can assess the overall performance of any given functions we write. While it may not be great for commercial use (it doesn’t take into account the frequency of execution), the tool gives us a beginner-level, hands-on way to learn what’s going on underneath the surface. With it, we can answer some basic questions about JavaScript performance without needing to know anything about the architecture of Node.js, or any other frameworks we might be using.

So at this point, let’s get back to our example question: How much faster is a synchronous operation than an asynchronous one? When we run the code above, we get this:

{
  min: 0.0003470182418823242,
  max: 0.5113691091537476,
  mean: 0.0009858494997024537,
  median: 0.0003720521926879883,
  variance: 0.0002635236615587046,
  std: 0.016233411889023965,
  counts: 1000,
  name: 'Sync Simple Return'
}
{
  min: 0.00033605098724365234,
  max: 1.022467017173767,
  mean: 0.0019857296943664553,
  median: 0.0005559921264648438,
  variance: 0.0012207576125402922,
  std: 0.034939341901934734,
  counts: 1000,
  name: 'Async Simple Return'
}

Interesting! At least on my machine, the async operation seems to take around twice as long as the sync operation. This means that if a synchronous function can achieve the same result as an async one, we can almost guarantee that the synchronous version will be faster.

But now let’s say that you have two async operations that need to run: Is putting them in a Promise.all() faster, or would it make more sense to run them serially? When run, we get this:
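The two strategies being compared can be sketched like this (a sketch, reusing the same trivial asyncReturn from the earlier example as a stand-in for any cheap async operation):

```javascript
// Stand-in for any cheap async operation.
async function asyncReturn() {
  return "returned!";
}

// Serial: each await finishes before the next operation starts.
async function serialReturn() {
  const a = await asyncReturn();
  const b = await asyncReturn();
  return [a, b];
}

// Parallel: both promises are created first, then awaited together.
async function parallelReturn() {
  return Promise.all([asyncReturn(), asyncReturn()]);
}
```

Both functions resolve to the same two-element array; only their scheduling differs.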

{
  min: 0.0008840560913085938,
  max: 6.775617599487305,
  mean: 0.01129733371734619,
  median: 0.0012230873107910156,
  variance: 0.048514687320896054,
  std: 0.22026049877564532,
  counts: 1000,
  name: 'Serial async return'
}
{
  min: 0.00128936767578125,
  max: 0.2218303680419922,
  mean: 0.0022107877731323243,
  median: 0.0013818740844726562,
  variance: 0.0001237706249184695,
  std: 0.011125224713167348,
  counts: 1000,
  name: 'Parallel async return'
}

As one might imagine, bundling up our async functions in a Promise.all is faster. However, this is not always the case: the timing changes, for example, when our async functions use timeouts (you can read more about this in this StackOverflow question), which may be hidden somewhere in our code.

This gives students and hobbyists the chance to see what is really going on under the hood of JavaScript, as well as in their own code base.

There is no reason we cannot ask many, many more questions about general performance: map, forEach or the for-loop performance is an open problem, as mentioned. But what about declaring let vs const? Array.concat vs the spread operator? 
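As a small illustration of the last question, the two concatenation styles produce identical arrays, so only their speed distinguishes them (a minimal sketch, not part of the package):

```javascript
// Two ways of joining arrays that yield the same result.
const a = [1, 2];
const b = [3, 4];

const viaConcat = a.concat(b);   // Array.prototype.concat
const viaSpread = [...a, ...b];  // spread operator
```

Either form is a candidate for benchmarking with the package above, since the outputs are interchangeable.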

One controversial question that has come up at work was this: In async functions, should we return await, or simply return a promise and expect the runtime to resolve it automatically? ESLint tells us that we shouldn’t use an await before a return, but there is no clear answer as to why. With our package, we can measure the difference for ourselves:
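The two variants in question can be sketched like this (someAsyncFunction here is a hypothetical stand-in for any promise-returning call):

```javascript
// Hypothetical stand-in for any promise-returning call.
async function someAsyncFunction() {
  return 42;
}

// Returns the promise directly; the caller's await resolves it.
async function withoutAwait() {
  return someAsyncFunction();
}

// Resolves the promise before returning, adding an extra microtask tick.
async function withAwait() {
  return await someAsyncFunction();
}
```

Both variants resolve to the same value; the question is purely about the cost of the extra resolution step.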

{ mean: 0.026724271774291992, name: 'async return someAsyncFunction' }
{ mean: 0.015191343307495117, name: 'async return await someAsyncFunction' }

I’ve probed several other, similar questions (source code here) and these are my results (recorded as mean differences, using Node 12):

Assigning and garbage collecting of const vs let vs var? const (52% faster)

Executing console.error vs console.log? console.error (41% faster)

Using Array.concat vs the spread operator? concat (24% faster)

Using a for-in loop vs Object.keys + for-loop? Object.keys + for-loop (92% faster)

Performing Array.push vs direct assignment? Direct assignment (36% faster)

Array.reduce or for-loop adding? Array.reduce (190% faster (!))

Array.reduce or forEach adding? Varies.

await operation length? 5ms
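For reference, the reduce vs for-loop comparison from the list above can be sketched as follows (illustrative only; relative speeds vary across engines and Node versions):

```javascript
// Summing 1..1000 two ways; both strategies yield the same total.
const nums = Array.from({ length: 1000 }, (_, i) => i + 1);

// Array.reduce: accumulate in a single expression.
const sumReduce = nums.reduce((acc, n) => acc + n, 0);

// for-loop: accumulate in a mutable variable.
let sumLoop = 0;
for (let i = 0; i < nums.length; i++) {
  sumLoop += nums[i];
}

console.log(sumReduce, sumLoop); // both 500500
```

Since the results are identical, either form can be dropped into the package to see which one wins on a given machine.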

Though it is sometimes obvious which method is faster, it is good to get a definitive result instead of relying on our intuitions. Only by probing such questions do we allow ourselves to become better at what we do, and create performant products.

