Optimizing JavaScript for extreme performance and low memory consumption


While making WidgetCity, I had to take various measures to speed up the code. In this post, I’ll show you the tricks I learned – some of which I haven’t seen mentioned anywhere else.

A lot of JavaScript performance tips involve things you typically see on websites, like minimizing DOM modifications. This case was different: it was the script itself that had to run faster – it didn’t touch the DOM at all, it just processed a lot of data.

Finding the source of the problems

Before optimizing, you should profile the code to find out what is actually running slowly. The most useful tool for this when working with JavaScript is definitely Firebug’s Profiler feature. Without it, I would have had much more difficulty finding the problems and testing how my changes affected the speed.

You can find Firebug’s profiler in the Console tab. Just hit the profiler button once to begin profiling, and hit it again to stop. After stopping, you will receive an informative list of the functions that were called during the profiling and how long each of them took.
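When a full profiler isn’t available – for instance on a mobile browser like the N95’s – a crude fallback is manual timing. This is just a sketch of the idea, not code from WidgetCity; the `timeIt` helper name is made up:

```javascript
function timeIt(label, fn) {
  // run the function once and record how long it took in milliseconds
  var start = Date.now();
  var result = fn();
  var elapsed = Date.now() - start;
  // in a browser you could log this; here we just return the measurements
  return { label: label, result: result, elapsed: elapsed };
}

// example: time a busy summing loop
var timing = timeIt("sum", function() {
  var total = 0;
  for (var i = 0; i < 100000; i++) total += i;
  return total;
});
```

Wrapping the code under test in a function like this also makes it easy to run the same snippet several times and compare variants.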

Problem 1: Iterating over large amounts of data

The first problem in WidgetCity was the sheer amount of data it had to store for the simulation. A full 128×128 tile map required 16 384 iterations to be completely processed. This may not seem like a very large amount, but just iterating over the data, without doing anything else, caused a bigger hit than it would in most other languages.

As odd as it might sound, you can use a reversed do-while loop to speed up iteration – instead of using a for loop and counting up, you use a do-while loop and count down.

var i = data.length;
do {
 // i starts at data.length, so index the array with i - 1
 /* some code */
} while (--i);

Why does this run faster? Apparently removing the comparison against the array’s length on every iteration makes most of the difference, and using --i instead of i-- also accounts for some.

The problem with this approach is that it isn’t always applicable – it processes the elements in reverse order, and since the do-while body always runs at least once, it breaks on an empty array.
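To make the caveats concrete, here is a minimal sketch of the reversed loop with an empty-array guard (the data and the summing are made up for illustration):

```javascript
var data = [2, 4, 6, 8];
var sum = 0;

// guard: the do-while body always runs once, so skip empty arrays
if (data.length) {
  var i = data.length;
  do {
    // i starts at data.length, so index with i - 1;
    // note the elements are visited in reverse order
    sum += data[i - 1];
  } while (--i);
}
```

If the order of iteration matters to your logic, this trick doesn’t apply and a plain for loop is the safer choice.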

Problem 2: Calling functions

A rather surprising finding was that calling functions added very significant overhead. Just having a single function call inside one of those large loops could add a lot of processing time.

How to solve this? Inline the function’s code inside the loop.

Instead of calling the function in the loop, you simply move the function’s code into the loop body itself. This has a significant downside: it reduces the readability of the code, and can introduce code duplication if the same function is called in more than one place.

This was one of the biggest performance boosters I found – in this case, more important than code readability.
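As an illustration of the transformation (with a made-up `squared` helper, not WidgetCity’s actual code), the before and after look like this:

```javascript
var xs = [1, 2, 3], ys = [4, 5, 6];

// before: a function call on every iteration
function squared(n) { return n * n; }
var total1 = 0;
for (var i = 0; i < xs.length; i++) {
  total1 += squared(xs[i]) + squared(ys[i]);
}

// after: the same arithmetic inlined into the loop body
var total2 = 0;
for (var j = 0; j < xs.length; j++) {
  total2 += xs[j] * xs[j] + ys[j] * ys[j];
}
```

Both loops compute the same result; the second just avoids two function calls per iteration at the cost of duplicating the helper’s logic.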

Problem 3: Memory limitations

Due to the low amount of memory available on the Nokia N95, the application ran out of memory on a few occasions.

First, the original map size was 300×300, which made the phone run out of memory immediately when creating the map. The only fix was to reduce the map to its final size of 128×128. A 300×300 map could also have had other performance repercussions, as it would have held over five times as much data.

Second, when implementing saving and loading of games, the app would again run out of memory. This was probably because the entire map was being serialized into a single JSON string.

Remco Lanting came up with the idea that solved this: splitting the data into smaller pieces.

Instead of saving the whole 128×128 map array in one shot, the code saves it in 8 pieces. This keeps each serialized JSON string smaller, so the app doesn’t run out of memory.

The same is done when loading: since the JSON data is saved as 8 separate “blocks”, each block is also eval’d back into the array separately.
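A sketch of the chunked save/load, assuming a flat array for the map (the real tile data was more complex, and the original code eval’d the blocks back; this sketch uses JSON.parse instead):

```javascript
// a hypothetical flat 128x128 map, filled with dummy tile values
var SIZE = 128;
var map = [];
for (var i = 0; i < SIZE * SIZE; i++) map[i] = i % 7;

// save: serialize the map as 8 separate JSON blocks
var CHUNKS = 8;
var chunkSize = map.length / CHUNKS; // 2048 tiles per block
var saved = [];
for (var c = 0; c < CHUNKS; c++) {
  saved[c] = JSON.stringify(map.slice(c * chunkSize, (c + 1) * chunkSize));
}

// load: parse each block separately and rebuild the full array
var loaded = [];
for (var b = 0; b < CHUNKS; b++) {
  loaded = loaded.concat(JSON.parse(saved[b]));
}
```

Each block would be written to its own storage key, so no single serialization step ever has to hold the whole map’s string in memory.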

More optimizations: Dividing and flooring numbers

A part of the game logic also required dividing some numbers and making sure the result was a whole number.

Typically this requires you to first divide the number and then apply Math.floor. And as mentioned, calling functions can be expensive.

There is one “neat” trick for this, though. “Neat” because it can be very confusing for people who aren’t familiar with the syntax, and it does make the code somewhat more difficult to read.

The trick involves using a bit-shift:

var foo = 10;
//for most purposes, this is the same as doing Math.floor(foo / 4)
var result = foo >> 2;

Doing >> 2 is, as mentioned in the comment, pretty much the same as first dividing by 4 and then calling Math.floor – but it’s a single operation instead of two, so it can be a bit faster. Note that it only works reliably on numbers that fit in a 32-bit integer.

If you aren’t familiar with binary math, bit shifts can be difficult to understand. Put simply, >> 1 is division by 2, >> 2 is division by 4, >> 3 is division by 8, >> 4 is division by 16, and so on.
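A few concrete values make the pattern easier to see (the numbers here are arbitrary examples):

```javascript
// >> n divides by 2^n and floors, for numbers that fit in 32 bits
var a = Math.floor(10 / 4); // 2
var b = 10 >> 2;            // 2 - same result, one operation

var c = 37 >> 1; // 18, i.e. Math.floor(37 / 2)
var d = 37 >> 3; // 4,  i.e. Math.floor(37 / 8)
```

For divisors that aren’t powers of two, you still need the ordinary divide-and-floor approach.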

As usual, Wikipedia has a good article on bitwise operations, which is worth reading if you want more information on this.

Even more: Wrap code in anonymous functions

This one was suggested by fearphage – wrap all your code in anonymous functions, even if it doesn’t actually define any globals.

For reasons I can’t properly explain – possibly because local variable lookups are faster than global ones – this also improved the execution speed of the scripts.

So whenever you have code in a JS file, remember to put it inside a self-executing anonymous function like this:

(function(window) {
/* all code in your file goes here */
})(window);
You can also make the function take the window object as an argument, as above, for a small possible gain.

Lastly: Reduce scope traversal

This one builds on the previous tip, and was again suggested by fearphage: add local variables inside the anonymous function for commonly used functions, like Math.round or Math.random.

(function(window) {
var round = Math.round;
var random = Math.random;
/* all code in your file goes here */
})(window);


So there are various ways of speeding up pure JavaScript execution, even when you don’t have anything like the DOM in the way.

In addition to what I showed here, there was one more thing I tried: seeing whether there is any difference between calling a function on an object versus on an instance:

var x = new Foo();

I was not able to measure a difference between these, so if you prefer classical OOP-style instances over “static” functions on objects, go for it.
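To spell out what was being compared (Foo and the method here are hypothetical, just illustrating the two styles):

```javascript
// "static" style: a plain function on an object literal
var MathUtils = {
  double: function(n) { return n * 2; }
};

// classical OOP style: a constructor with a prototype method
function Foo() {}
Foo.prototype.double = function(n) { return n * 2; };

var x = new Foo();
var viaStatic = MathUtils.double(21);   // 42
var viaInstance = x.double(21);         // 42
```

Both call sites do the same work; the only difference is whether the method is looked up on a shared object or through the instance’s prototype chain.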

Just remember that you should always profile your code before and after optimizing, to see where the slow parts are and whether your modifications had any effect.

Also consider that many optimizations make the code more difficult to maintain, so if you don’t absolutely need the speedup, it may be better not to do it.