JavaScript can benefit from many of the same speed-optimization techniques that are used in other languages, like C [1, 2] and Java. Algorithms and data structures, caching frequently used values, loop unrolling and hoisting, removing tail recursion, and strength-reduction techniques all have a place in your JavaScript optimization toolbox. However, how you interact with the Document Object Model (DOM) determines in large part how efficiently your code executes.
Unlike other programming languages, JavaScript manipulates web pages through a relatively sluggish API, the DOM. Interacting with the DOM is almost always more expensive than straight computations. After choosing the right algorithm and data structure and refactoring, your next consideration should be minimizing DOM interaction and I/O operations.
With most programming languages, you can trade space for time complexity and vice versa. [3] But on the web, JavaScript must be downloaded. Unlike desktop applications, where you can trade another kilobyte or two for speed, with JavaScript you have to balance execution speed against file size.
How Fast Is JavaScript?
Unlike C, with its optimizing compilers that increase execution speed and decrease file size, JavaScript is an interpreted language that usually runs over a network connection (unless you count Netscape’s Rhino, which can compile and optimize JavaScript into Java byte code for embedded applications [4]). This makes JavaScript relatively slow compared to compiled languages. [5] However, most scripts are so small and fast that users won’t notice any speed degradation. Longer, more complex scripts are where this chapter can help jumpstart your JavaScript.
Design Levels
A hierarchy of optimization levels exists for JavaScript, what Bentley and others call design levels. [6] First come global changes, like using the right algorithms and data structures, which can speed up your code by orders of magnitude. Next comes refactoring, which restructures code in a disciplined way into a simpler, more efficient form. [7] Then comes minimizing DOM interaction and I/O or HTTP requests. Finally, if performance is still a problem, use local optimizations like caching frequently used values to save on recalculation costs. Here is a summary of the optimization process:
- Choose the right algorithm and data structure.
- Refactor to simplify code.
- Minimize DOM and I/O interaction.
- Use local optimizations last.
When optimizing your code, start at the highest level and work your way down until the code executes fast enough. For maximum speed, work at multiple levels.
Measure Your Changes
Measurement is a key part of the optimization process. Use the simplest algorithms and data structures you can, and measure your code’s performance to see whether you need to make any changes. Use timing commands or profilers to locate any bottlenecks. Optimize these hot spots one at a time, and measure any improvement. You can use the Date object to time individual snippets:
<script type="text/javascript">
function DoBench(x) {
    var startTime, endTime, gORl = 'local';
    if (x == 1) {
        startTime = new Date().getTime();
        Bench1();
        endTime = new Date().getTime();
    } else {
        gORl = 'global';
        startTime = new Date().getTime();
        Bench2();
        endTime = new Date().getTime();
    }
    alert('Elapsed time using ' + gORl + ' variable: ' +
        ((endTime - startTime) / 1000) + ' seconds.');
}
...
</script>
This is useful when comparing one technique to another. But for larger projects, only a profiler will do. Mozilla.org includes the Venkman profiler in the Mozilla browser distribution to help optimize your JavaScript.
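As a rough sketch (this helper is not from the original text; the function under test is whatever you pass in), the same Date-based timing can be wrapped in a reusable harness:

function timeIt(label, fn) {
    var startTime = new Date().getTime();   // milliseconds before the run
    fn();                                   // run the code under test
    var endTime = new Date().getTime();     // milliseconds after the run
    alert('Elapsed time for ' + label + ': ' +
        ((endTime - startTime) / 1000) + ' seconds.');
}

timeIt('string concatenation', function () {
    var s = '';
    for (var i = 0; i < 10000; i++) { s += 'x'; }
});

Because the Date object has at best millisecond resolution, time snippets over many iterations rather than a single call.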
The Venkman JavaScript Profiler
For more information on the Venkman profiler, see the Venkman pages at mozilla.org.
The Pareto Principle
Economist Vilfredo Pareto found in 1897 that about 80 percent of Italy’s wealth was owned by about 20 percent of the population. [8] This finding has become known as the 80/20 rule, or the Pareto principle, and it is applied to a variety of disciplines. Although some say it should be adjusted to a 90/10 rule, this rule of thumb applies to everything from employee productivity and quality control to programming.
Barry Boehm found that 20 percent of a program consumes 80 percent of the execution time. [9] He also found that 20 percent of software modules are responsible for 80 percent of the errors. [10] Donald Knuth found that more than 50 percent of a program’s run time is usually due to less than 4 percent of the code. [11] Clearly, a small portion of code accounts for the majority of program execution time. Concentrate your efforts on these hot areas.
Algorithms and Data Structures
As we learn in computer science classes, global optimizations (such as algorithm and data structure choices) determine in large part the overall performance of our programs. For larger values of “n,” or the number of input elements, the complexity of running time can dominate any local optimization concerns. This complexity is expressed in O-notation, where complexity or “order” is expressed as a function of n. Table 10.1 shows some examples.
Table 10.1 Run-Time Complexity of Classic Algorithms [12, 13]

| Notation   | Name        | Example                                                    |
|------------|-------------|------------------------------------------------------------|
| O(1)       | constant    | array index, simple statements                             |
| O(log n)   | logarithmic | binary search                                              |
| O(n)       | linear      | string comparison, sequential search                       |
| O(n log n) | n log n     | quicksort and heapsort                                     |
| O(n^2)     | quadratic   | simple selection and insertion sorting methods (two loops) |
| O(n^3)     | cubic       | matrix multiplication of n×n matrices                      |
| O(2^n)     | exponential | set partitioning (traveling salesman)                      |
Array access or simple statements are constant-time operations, or O(1). Well-crafted quicksorts run in O(n log n) time. Two nested for loops take on the order of n×n, or O(n^2), time. For low values of n, choose simple data structures and algorithms. As your data grows, use lower-order algorithms and data structures that will scale for larger inputs.
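To make the orders concrete, here is a minimal sketch (not from the original text) contrasting an O(n) sequential search with an O(log n) binary search of a sorted array:

// O(n): examine each element in turn
function sequentialSearch(arr, target) {
    for (var i = 0; i < arr.length; i++) {
        if (arr[i] == target) { return i; }
    }
    return -1;
}

// O(log n): halve the search space on each pass (arr must be sorted)
function binarySearch(arr, target) {
    var lo = 0, hi = arr.length - 1;
    while (lo <= hi) {
        var mid = (lo + hi) >> 1;             // integer midpoint
        if (arr[mid] == target) { return mid; }
        if (arr[mid] < target) { lo = mid + 1; }
        else { hi = mid - 1; }
    }
    return -1;
}

For a sorted 1,000-element array, the sequential search may need 1,000 comparisons, while the binary search needs at most about 10.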
Use built-in functions whenever possible (like the Math object), because these are generally faster than custom replacements. For critical inner loops, measure your changes because performance can vary among different browsers.
Refactor to Simplify Code
Refactoring is the art of reworking your code to a more simplified or efficient form in a disciplined way. Refactoring is an iterative process:
- Write correct, well-commented code that works.
- Get it debugged.
- Streamline and refine by refactoring the code to replace complex sections with shorter, more efficient code.
- Mix well, and repeat.
Refactoring clarifies, refines, and in many cases speeds up your code. Here’s a simple example that replaces an assignment with an initialization. So instead of this:
function foo() {
    var i;
    // ....
    i = 5;
}
Do this:
function foo() {
    var i = 5;
    // ....
}
For More Information
Refactoring is a discipline unto itself. In fact, entire books have been written on the subject. See Martin Fowler’s book, Refactoring: Improving the Design of Existing Code (Addison-Wesley, 1999). See also his catalog of refactorings at http://www.refactoring.com/.
Minimize DOM Interaction and I/O
Interacting with the DOM is significantly more complicated than arithmetic computations, which makes it slower. When the JavaScript interpreter encounters a scoped object, the engine resolves the reference by looking up the first object in the chain and working its way through the next object until it finds the referenced property. To maximize object resolution speed, minimize the scope chain of objects. Each node reference within an element’s scope chain means more lookups for the browser. Keep in mind that there are exceptions, like the window object, which is faster to fully reference. So instead of this:
var link = location.href;
Do this:
var link = window.location.href;
Minimize Object and Property Lookups
Object-oriented techniques encourage encapsulation by tacking sub-nodes and methods onto objects. However, object-property lookups are slow, especially if there is an evaluation. So instead of this:
for(var i = 0; i < 1000; i++) a.b.c.d(i);
Do this:
var e = a.b.c.d;
for (var i = 0; i < 1000; i++) e(i);
Reduce the number of dots (object.property) and brackets (object["property"]) in your program by caching frequently used objects and properties. Nested properties are the worst offenders (object.property.property.property).
Here is an example of minimizing lookups in a loop. Instead of this:
for (i=0; i<someArrayOrObject.length; i++)
Do this:
for (var i=0, n=someArrayOrObject.length; i<n; i++)
Also, accessing a named property or object requires a lookup. When possible, refer to the object or property directly by using an index into an object array. So instead of this:
var form = document.f2; // refer to form by name
Do this:
var form = document.forms[1]; // refer to form by position
Shorten Scope Chains
Every time a function executes, JavaScript creates an execution context that defines its own little world for local variables. Each execution context has an associated scope chain object that defines the object’s place in the document’s hierarchy. The scope chain lists the objects within the global namespace that are searched when evaluating an object or property. Each time a JavaScript program begins executing, certain built-in objects are created.
The global object lists the properties (global variables) and predefined values and functions (Math, parseInt(), etc.) that are available to all JavaScript programs.
Each time a function executes, a temporary call object is created. The function’s arguments and variables are stored as properties of its call object. Local variables are properties of the call object.
Within each call object is the calling scope. Each nested function recursively defines a new child of that scope (JavaScript scopes functions, not brace-delimited blocks). When JavaScript looks up a variable (called variable name resolution), the interpreter looks first in the local scope, then in its parent, then in the parent of that scope, and so on until it hits the global scope. In other words, JavaScript looks at the first item in the scope chain, and if it doesn’t find the variable there, it bubbles up the chain until it hits the global object.
That’s why global variables are slow to resolve. They are the worst-case scenario for object lookups.
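Here is an illustrative sketch (the variable and function names are hypothetical) of how lookup cost grows with chain depth:

var g = 1;                       // global: the last link on every scope chain
function outer() {
    var o = 2;                   // property of outer's call object
    function inner() {
        var i = 3;               // property of inner's call object: found first
        return i + o + g;        // i: immediate; o: one link up; g: climbs to the global object
    }
    return inner();
}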
During execution, only with statements and catch clauses affect the scope chain.
Avoid with Statements
The with statement extends the scope chain temporarily with a computed object, executes a statement with this longer scope chain, and then restores the original scope chain. This can save you typing time, but cost you execution time. Each additional child node you refer to means more work for the browser in scanning the global namespace of your document. So instead of this:
with (document.formname) {
    field1.value = "one";
    field2.value = "two";
    ...
}
Do this:
var form = document.formname;
form.field1.value = "one";
form.field2.value = "two";
Cache the object or property reference instead of using with, and use this variable for repeated references. The with statement is also widely discouraged, so it is best avoided.
Add Complex Subtrees Offline
When you are adding complex content to your page (like a table), you will find it is faster to build your DOM node and all its sub-nodes offline before adding it to the document. So instead of this (see Listing 10.1):
Listing 10.1 Adding Complex Subtrees Online
var tableEl, rowEl, cellEl;
var numRows = 10;
var numCells = 5;
tableEl = document.createElement("TABLE");
tableEl = document.body.appendChild(tableEl);
for (i = 0; i < numRows; i++) {
    rowEl = document.createElement("TR");
    for (j = 0; j < numCells; j++) {
        cellEl = document.createElement("TD");
        cellEl.appendChild(document.createTextNode("[row " + i + " cell " + j + "]"));
        rowEl.appendChild(cellEl);
    }
    tableEl.appendChild(rowEl);
}
Do this (see Listing 10.2):
Listing 10.2 Adding Complex Subtrees Offline
var tableEl, rowEl, cellEl;
var numRows = 10;
var numCells = 5;
tableEl = document.createElement("TABLE");
for (i = 0; i < numRows; i++) {
    rowEl = document.createElement("TR");
    for (j = 0; j < numCells; j++) {
        cellEl = document.createElement("TD");
        cellEl.appendChild(document.createTextNode("[row " + i + " cell " + j + "]"));
        rowEl.appendChild(cellEl);
    }
    tableEl.appendChild(rowEl);
}
document.body.appendChild(tableEl);
Listing 10.1 adds the table object to the page immediately after it is created and adds the rows afterward. This runs much slower because the browser must update the page display every time a new row is added. Listing 10.2 runs faster because it adds the resulting table object last, via document.body.appendChild().
Edit Subtrees Offline
In a similar fashion, when you are manipulating subtrees of a document, first remove the subtree, modify it, and then re-add it. DOM manipulation causes large parts of the tree to recalculate the display, slowing things down. Also, createElement() is slow compared to cloneNode(). When possible, create a template subtree, and then clone it to create others, only changing what is necessary. Let’s combine these two optimizations into one example. So instead of this (see Listing 10.3):
Listing 10.3 Editing Subtrees Online
var ul = document.getElementById("myUL");
for (var i = 0; i < 200; i++) {
    ul.appendChild(document.createElement("LI"));
}
Do this (see Listing 10.4):
Listing 10.4 Editing Subtrees Offline
var ul = document.getElementById("myUL");
var li = document.createElement("LI");
var parent = ul.parentNode;
parent.removeChild(ul);
for (var i = 0; i < 200; i++) {
    ul.appendChild(li.cloneNode(true));
}
parent.appendChild(ul);
By editing your subtrees offline, you’ll realize significant performance gains. The more complex the source document, the better the gain. Substituting cloneNode for createElement adds an extra boost.
Concatenate Long Strings
By the same token, avoid multiple document.writes in favor of one document.write of a concatenated string. So instead of this:
document.write(' string 1');
document.write(' string 2');
document.write(' string 3');
document.write(' string 4');
Do this:
var txt = ' string 1' +
          ' string 2' +
          ' string 3' +
          ' string 4';
document.write(txt);
Access NodeLists Directly
NodeLists are lists of elements from object properties like .childNodes and methods like getElementsByTagName(). Because these objects are live (updated immediately when the underlying document changes), they are memory intensive and can take up many CPU cycles. If you need a NodeList for only a moment, it is faster to index directly into the list. Browsers are optimized to access node lists this way. So instead of this:
nl = document.getElementsByTagName("P");
for (var i = 0; i < nl.length; i++) {
    p = nl[i];
}
Do this:
for (var i = 0; (p = document.getElementsByTagName("P")[i]); i++)
In most cases, this is faster than caching the NodeList. In the second example, the browser doesn’t need to create the node list object. It needs only to find the element at index i at that exact moment.
Use Object Literals
Object literals work like array literals by assigning entire complex data types to objects with just one command. So instead of this:
car = new Object();
car.make = "Honda";
car.model = "Civic";
car.transmission = "manual";
car.miles = 1000000;
car.condition = "needs work";
Do this:
car = {
    make: "Honda",
    model: "Civic",
    transmission: "manual",
    miles: 1000000,
    condition: "needs work"
};
This saves space and unnecessary object references.
Local Optimizations
Okay, you’ve switched to a better algorithm and revamped your data structure. You’ve refactored your code and minimized DOM interaction, but speed is still an issue. It is time to tune your code by tweaking loops and expressions to speed up hot spots. In his classic book, Writing Efficient Programs (Prentice Hall, 1982), Jon Bentley revealed 27 optimization guidelines for writing efficient programs. These code-tuning rules are actually low-level refactorings that fall into five categories: space for time and vice versa, loops, logic, expressions, and procedures. In this section, I touch on some highlights.
Trade Space for Time
Many of the optimization techniques you can read about in Bentley’s book and elsewhere trade space (more code) for time (more speed). You can add more code to your scripts to achieve higher speed by “defactoring” hot spots to run faster. By augmenting objects to store additional data or making it more easily accessible, you can reduce the time required for common operations.
In JavaScript, however, any additional speed should be balanced against any additional program size. Optimize hot spots, not your entire program. You can compensate for this tradeoff by packing and compressing your scripts.
Augment Data Structures
Douglas Bagnall employed data structure augmentation in the miniscule 5K chess game that he created for the 2002 5K contest (http://www.the5k.org/). Bagnall used augmented data structures and binary arithmetic to make his game fast and small. The board consists of a 120-element array containing numbers that represent pieces, empty squares, or "off-the-board" squares. The off-the-board squares speed up testing of the board edges, preventing bishops and other pieces from wrapping from one edge to the other while they’re moving, without expensive positional tests.
Each element in his 120-item linear array contains a single number that represents the status of each square. So instead of this:
board=[16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,2,3,4,5,6,2,3,4,5,16,....]
He did this:
bstring = "ggggggggggggggggggggg23456432gg11111111gg0000 ... g";
for (z = 0; z < 120; z++) {
    board[z] = parseInt(bstring.charAt(z), 35);
}
Each character is a base-35 digit representing a square on the board (parseInt uses a radix of 35). Because the letter "g" corresponds to 16 (the 5th bit; that is, bit 4), Bagnall says he actually could have used base 17 instead of 35. Perhaps this leaves room for future enhancements.
Each position on the board is encoded like this:
bit 4 (16):   0 = on board, 1 = off board
bit 3 (8):    0 = white, 1 = black
bits 0-2 (7): 0 = empty; non-zero = the piece type:
              1 = pawn, 2 = rook, 3 = knight, 4 = bishop, 5 = queen, 6 = king
So to test the color of a piece, movingPiece, you’d use the following:
ourCol = movingPiece & 8;   // what color is it? 8 = black, 0 = white
movingPiece &= 7;           // now we have the color info, dump it
if (movingPiece > 1) {      // if it is not a pawn
Bagnall also checks that the piece exists (because the preceding code will return white for an empty square), so he checks that movingPiece is non-empty. To see his code and the game in action, visit the following sites:
- http://halo.gen.nz/chess/main-branch/ (the actual code)
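As a hedged sketch of reading Bagnall's encoding (the function name and return strings here are illustrative, not from his code), a square value decodes like this:

function decodeSquare(sq) {
    if (sq & 16) { return "off board"; }          // bit 4 set: unplayable square
    var type = sq & 7;                            // bits 0-2: piece type
    if (type == 0) { return "empty"; }
    var color = (sq & 8) ? "black" : "white";     // bit 3: color
    return color + " piece, type " + type;        // 1 = pawn ... 6 = king
}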
Cache Frequently Used Values
One of the most effective techniques you can use to speed up your JavaScripts is to cache frequently used values. When you cache frequently used expressions and objects, you do not need to recompute them. So instead of this (see Listing 10.5):
Listing 10.5 A Loop That Needs Caching and Fewer Evaluations
var d = 35;
for (var i = 0; i < 1000; i++) {
    y += Math.sin(d) * 10;
}
Do this (see Listing 10.6):
Listing 10.6 Caching Complex Calculations Out of a Loop
var d = 35;
var math_sind = Math.sin(d) * 10;
for (var i = 0; i < 1000; i++) {
    y += math_sind;
}
Because Math is a global object, declaring the math_sind variable also avoids resolving to a global object for each iteration. You can combine this technique with minimizing DOM interaction by caching frequently used object or property references. Simplify the calculations within your loops and their conditionals.
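Here is a brief sketch of that combination (the element id "status" is hypothetical): cache the DOM reference once, accumulate in a local variable, and write to the document a single time:

var statusEl = document.getElementById("status");  // DOM reference cached once
var buf = "";
for (var i = 0; i < 100; i++) {
    buf += i + " ";                                // pure computation inside the loop
}
statusEl.innerHTML = buf;                          // one DOM write instead of one hundred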
Store Precomputed Results
For expensive functions (like sin()), you can precompute values and store the results. You can use a lookup table (O(1)) to handle any subsequent function calls instead of recomputing the function (which is expensive). So instead of this:
function foo(i) { if (i < 10) {return i * i - i;} }
Do this:
values = [0*0-0, 1*1-1, 2*2-2, ..., 9*9-9];
function foo(i) {
    if (i < 10) { return values[i]; }
}
This technique is often used with trigonometric functions for animation purposes. A sine wave makes an excellent approximation of the acceleration and deceleration of a body in motion:
var sin = new Array();
for (var i = 1; i <= 360; i++) {
    sin[i] = Math.sin(i * Math.PI / 180);  // Math.sin expects radians, so convert from degrees
}
In JavaScript, this technique is less effective than it is in a compiled language like C. Unchanging values are computed at compile time in C, while in an interpreted language like JavaScript, they are computed at runtime.
Use Local versus Global Variables
Reducing the scope of your variables is not only good programming practice, it is faster. So instead of this (see Listing 10.7):
Listing 10.7 Loop with Global Variable
function MyInnerLoop() {
    for (i = 0; i < 1000; i++);
}
Do this (see Listing 10.8):
Listing 10.8 Loop with Local Variable
function MyInnerLoop() {
    for (var i = 0; i < 1000; i++);
}
Local variables are anywhere from 60 percent faster to 26 times faster than global variables in tight inner loops. This is due in part to the fact that global variables require the interpreter to search up the function’s scope chain, while local variables are properties of the function’s call object and are searched first. Netscape 6 in particular is slow when using global variables. Mozilla 1.1 has improved speed, but this technique is relevant to all browsers. See Scott Porter’s local versus global test at http://javascript-games.org/articles/local_global_bench.html.
Trade Time for Space
Conversely, you can trade time for space complexity by densely packing your data and code into a more compact form. By recomputing information, you can decrease the space requirements of a program at the cost of increased execution time.
Packing
Packing decreases storage and transmission costs by increasing the time needed to compact and retrieve the data. Sparse arrays and overlaying data into the same space at different times are two examples of packing. Removing whitespace and comments from your code are two more. Substituting shorter strings for longer ones also helps pack data into a more compact form.
Interpreters
Interpreters reduce program space requirements by replacing common sequences with more compact representations.
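As a tiny hedged sketch of the idea (hypothetical, not from the original text), a compact command string can stand in for verbose statements and be interpreted at runtime:

// Each character encodes one move; the loop interprets the "program"
var program = "uurrddll";
var x = 0, y = 0;
for (var i = 0; i < program.length; i++) {
    switch (program.charAt(i)) {
        case "u": y--; break;
        case "d": y++; break;
        case "l": x--; break;
        case "r": x++; break;
    }
}

Eight characters replace eight statements; the longer the command sequence, the greater the space savings.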
Some 5K competitors (http://www.the5k.org/) combine these two techniques to create self-extracting archives of their JavaScript pages, trading startup speed for smaller file sizes (http://www.dithered.com/experiments/compression/). See Chapter 9, “Optimizing JavaScript for Download Speed,” for more details.
Optimize Loops
Most hot spots are inner loops, which are commonly used for searching and sorting. There are a number of ways to optimize the speed of loops: removing or simplifying unnecessary calculations, simplifying test conditions, loop flipping and unrolling, and loop fusion. The idea is to reduce the cost of loop overhead and to include only repeated calculations within the loop.
Combine Tests to Avoid Compound Conditions
“An efficient inner loop should contain as few tests as possible, and preferably only one.” [14] Try to simulate exit conditions of the loop by other means. One technique is to embed sentinels at the boundary of data structures to reduce the cost of testing searches. Sentinels are commonly used for arrays, linked lists, and binary search trees. In JavaScript, however, arrays have a built-in length property (at least since version 1.2), so array boundary sentinels are more useful in languages like C.
One example from Scott Porter of JavaScript-Games.org is splitting an array of numeric values into separate arrays for extracting the data for a background collision map in a game. The following example of using sentinels also demonstrates the efficiency of the switch statement:
var serialData = new Array(-1,10,23,53,223,-1,32,98,45,32,32,25,-1,438,54,26,84,-1,487,43,11);
var splitData = new Array();
function init() {
    var ix = -1, n = 0, s, l = serialData.length;
    for (; n < l; n++) {
        s = serialData[n];
        switch (s) {            // switch blocks are much more efficient
            case -1:            // than if... else if... else if...
                splitData[++ix] = new Array();
                break;
            default:
                splitData[ix].push(s);
        }
    }
    alert(splitData.length);
}
Scott Porter explains the preceding code using some assembly language and the advantage of using the switch statement: [15]
“Here, -1 is the sentinel value used to split the data blocks. Switch blocks should always be used where possible, as they’re so much faster than an if...else series. This is because with if...else statements, a test must be made for each if, whereas switch blocks generate vector jump tables at compile time, so NO test is actually required in the underlying code! It’s easier to show with a bit of assembly language code. So an if/else statement:

if (n == 12) someBlock();
else if (n == 26) someOtherBlock();

becomes something like:

cmp eax,12
jz someBlock
cmp eax,26
jz someOtherBlock

whereas the equivalent switch statement:

switch (n) {
    case 12: someBlock(); break;
    case 26: someOtherBlock(); break;
}

becomes a single computed jump:

jmp [VECTOR_LIST+eax]”
Next, let’s look at some ways to minimize loop overhead. Using the right techniques, you can speed up a for loop by two or even three times.
Hoist Loop-Invariant Code
Move loop-invariant code out of loops (otherwise called code motion out of loops) to speed their execution. Rather than recomputing the same value in each iteration, move it outside the loop and compute it only once. So instead of this:
for (i = 0; i < iter; i++) {
    d = Math.sqrt(y);
    j += i * d;
}
Do this:
d = Math.sqrt(y);
for (i = 0; i < iter; i++) {
    j += i * d;
}
Reverse Loops
Reversing loop conditions so that they count down instead of up can double the speed of loops. Counting down to zero with the decrement operator (i--) is faster than counting up to a number of iterations with the increment operator (i++). So instead of this (see Listing 10.9):
Listing 10.9 A Normal for Loop Counts Up
function loopNormal() {
    for (var i = 0; i < iter; i++) {
        // do something here
    }
}
Do this (see Listing 10.10):
Listing 10.10 A Reversed for Loop Counts Down
function loopReverse() {
    for (var i = iter; i > 0; i--) {
        // do something here
    }
}
Flip Loops
Loop flipping moves the loop conditional from the top to the bottom of the loop. The theory is that the do while construct is faster than a for loop. So a normal loop (see Listing 10.9) would look like this flipped (see Listing 10.11):
Listing 10.11 A Flipped Loop Using do while
function loopDoWhile() {
    var i = 0;
    do {
        i++;
    } while (i < iter);
}
In JavaScript, however, this technique gives poor results. IE 5 Mac gives inconsistent results, while IE and Netscape for Windows are 3.7 to 4 times slower. The problem is the complexity of the conditional and the increment operator. Remember that we’re measuring loop overhead here, so small changes in structure and conditional strength can make a big difference. Instead, combine the flip with a reverse count (see Listing 10.12):
Listing 10.12 Flipped Loop with Reversed Count
function loopDoWhileReverse() {
    var i = iter;
    do {
        i--;
    } while (i > 0);
}
This technique is more than twice as fast as a normal loop and slightly faster than a flipped loop in IE5 Mac. Even better, simplify the conditional even more by using the decrement as a conditional like this (see Listing 10.13):
Listing 10.13 Flipped Loop with Improved Reverse Count
function loopDoWhileReverse2() {
    var i = iter - 1;
    do {
        // do something here
    } while (i--);
}
This technique is over three times faster than a normal for loop. Note the decrement operator doubles as a conditional; when it gets to zero, it evaluates as false. One final optimization is to substitute the pre-decrement operator for the post-decrement operator for the conditional (see Listing 10.14).
Listing 10.14 Flipped Loop with Optimized Reverse Count
function loopDoWhileReverse3() {
    var i = iter;
    do {
        // do something here
    } while (--i);
}
This technique is over four times faster than a normal for loop. This last condition assumes that i is greater than zero. Table 10.2 shows the results for each loop type listed previously for IE5 on my Mac PowerBook.
Table 10.2 Loop Optimizations Compared

|                 | Normal | Do While | Reverse | Do While Reverse | Do While Reverse2 | Do While Reverse3 |
|-----------------|--------|----------|---------|------------------|-------------------|-------------------|
| Total time (ms) | 2022   | 1958     | 1018    | 932              | 609               | 504               |
| Cycle time (ms) | 0.0040 | 0.0039   | 0.0020  | 0.0018           | 0.0012            | 0.0010            |
Unroll or Eliminate Loops
Unrolling a loop reduces the cost of loop overhead by decreasing the number of times you check the loop condition. Essentially, loop unrolling increases the number of computations per iteration. To unroll a loop, you perform two or more of the same statements for each iteration, and increment the counter accordingly. So instead of this:
var iter = number_of_iterations;
for (var i = 0; i < iter; i++) {
    foo();
}
Do this:
var iter = multiple_of_number_of_unroll_statements;
for (var i = 0; i < iter;) {
    foo(); i++;
    foo(); i++;
    foo(); i++;
    foo(); i++;
    foo(); i++;
    foo(); i++;
}
I’ve unrolled this loop six times, so the number of iterations must be a multiple of six. The effectiveness of loop unrolling depends on the number of operations per iteration. Again, the simpler, the better. For simple statements, loop unrolling in JavaScript can speed inner loops by as much as 50 to 65 percent. But what if the number of iterations is not known beforehand? That’s where techniques like Duff’s Device come in handy.
Duff’s Device
Invented by programmer Tom Duff while he was at Lucasfilm Ltd. in 1983 [16], Duff’s Device generalizes the loop unrolling process. Using this technique, you can unroll loops to your heart’s content without knowing the number of iterations beforehand. The original algorithm combined a do-while and a switch statement. The technique combines loop unrolling, loop reversal, and loop flipping. So instead of this (see Listing 10.15):
Listing 10.15 Normal for Loop
testVal = 0;
iterations = 500125;
for (var i = 0; i < iterations; i++) {
    // modify testVal here
}
Do this (see Listing 10.16):
Listing 10.16 Duff’s Device
function duffLoop(iterations) {
    var testVal = 0;
    // Begin actual Duff's Device
    // Original JS implementation by Jeff Greenberg 2/2001
    var n = iterations / 8;
    var caseTest = iterations % 8;
    do {
        switch (caseTest) {     // each case falls through to the next
            case 0: // modify testVal here
            case 7: // ditto
            case 6: // ditto
            case 5: // ditto
            case 4: // ditto
            case 3: // ditto
            case 2: // ditto
            case 1: // ditto
        }
        caseTest = 0;
    } while (--n > 0);
}
Like a normal unrolled loop, the main do loop runs about n = iterations/8 times, where 8 is the degree of unrolling. Unlike a normal unrolled loop, the modulus (caseTest = iterations % 8) handles any leftover iterations through the switch/case fall-through logic on the first pass. This technique is 8 to 44 percent faster in IE5+, and it is 94 percent faster in NS 4.7.
Fast Duff’s Device
You can avoid the complex do/switch logic by unrolling Duff’s Device into two loops. So instead of the original, do this (see Listing 10.17):
Listing 10.17 Fast Duff’s Device
function duffFastLoop8(iterations) {
    // From an anonymous donor to Jeff Greenberg's site
    var testVal = 0;
    var n = iterations % 8;
    while (n--) {
        testVal++;
    }
    n = parseInt(iterations / 8);
    while (n--) {
        testVal++; testVal++; testVal++; testVal++;
        testVal++; testVal++; testVal++; testVal++;
    }
}
This technique is about 36 percent faster than the original Duff’s Device on IE5 Mac. Even better, optimize the loop constructs by converting the while decrement to a do while pre-decrement like this (see Listing 10.18):
Listing 10.18 Faster Duff’s Device
function duffFasterLoop8(iterations) {
    var testVal = 0;
    var n = iterations % 8;
    if (n > 0) {
        do {
            testVal++;
        } while (--n);  // n must be greater than 0 here
    }
    n = parseInt(iterations / 8);
    do {
        testVal++; testVal++; testVal++; testVal++;
        testVal++; testVal++; testVal++; testVal++;
    } while (--n);
}
This optimized Duff’s Device is 39 percent faster than the original and 67 percent faster than a normal for loop (see Table 10.3).
Table 10.3 Duff’s Device Improved

| 500,125 Iterations | Normal for Loop | Duff’s Device | Duff’s Fast | Duff’s Faster |
|--------------------|-----------------|---------------|-------------|---------------|
| Total time (ms)    | 1437            | 775           | 493         | 469           |
| Cycle time (ms)    | 0.00287         | 0.00155       | 0.00099     | 0.00094       |
How Much to Unroll?
To test the effect of different degrees of loop unrolling, I tested large iteration loops with between 1 and 15 identical statements for the Faster Duff’s Device. Table 10.4 shows the results.
Table 10.4 Faster Duff’s Device Unrolled (Duff’s Faster, by degree of unrolling)

| Degree | Total time (ms) | Cycle time (ms) |
|--------|-----------------|-----------------|
| 1      | 925             | 0.00184         |
| 2      | 661             | 0.00132         |
| 3      | 576             | 0.00115         |
| 4      | 533             | 0.00106         |
| 5      | 509             | 0.00101         |
| 6      | 490             | 0.00097         |
| 7      | 482             | 0.00096         |
| 8      | 469             | 0.00093         |
| 9      | 467             | 0.00093         |
| 10     | 457             | 0.00091         |
| 11     | 453             | 0.00090         |
| 12     | 439             | 0.00087         |
| 13     | 437             | 0.00087         |
| 14     | 433             | 0.00086         |
| 15     | 433             | 0.00086         |
As you can see in Table 10.4, the effect diminishes as the degree of loop unrolling increases. Even after two statements, the time to loop through many iterations is less than 50 percent of a normal for loop. Around seven statements, the time is cut by two-thirds. Anything over eight reaches a point of diminishing returns. Depending on your requirements, I recommend that you choose to unroll critical loops by between four and eight statements for Duff’s Device.
Fuse Loops
If you have two loops in close proximity that use the same number of iterations (and don’t affect each other), you can combine them into one loop. So instead of this:
for (i = 0; i < j; i++) {
    sumserv += serv(i);
}
for (i = 0; i < j; i++) {
    prodfoo *= foo(i);
}
Do this:
for (i = 0; i < j; i++) {
    sumserv += serv(i);
    prodfoo *= foo(i);
}
Fusing loops avoids the additional overhead of another loop control structure and is more compact.
Expression Tuning
As regular expression connoisseurs can attest, tuning expressions themselves can speed up things considerably. Count the number of operations within critical loops and try to reduce their number and strength.
If the evaluation of an expression is costly, replace it with a less-expensive operation. Assuming that a is greater than 0, instead of this:
a > Math.sqrt(b);
Do this:
a*a > b;
Or even better:
var c = a*a; c>b;
Strength reduction is the process of simplifying expensive operations like multiplication, division, and modulus into cheap operations like addition, OR, AND, and shifting. Loop conditions and statements should be as simple as possible to minimize loop overhead. Here’s an example from Listing 10.10. So instead of this:
for (var i=iter;i>0;i--)
Do this:
var i=iter-1; do {} while (i--);
This technique simplifies the test condition from an inequality to a decrement, which also doubles as an exit condition once it reaches zero.
Miscellaneous Tuning Tips
You can use many techniques to “bum” CPU cycles from your code to cool down hot spots. Logic rules include short-circuiting monotone functions, reordering tests to place the least-expensive one first, and eliminating Boolean variables with if/else logic. You also can shift bits to reduce operator strength, but the speed-up is minimal and not consistent in JavaScript.
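As a small hedged sketch of operator strength reduction (not from the original text, and worth benchmarking per browser since the speed-up is minimal), power-of-two arithmetic can be rewritten with shifts and masks:

var n = 1000;
var half = n >> 1;      // n / 2 for non-negative integers
var times8 = n << 3;    // n * 8
var mod8 = n & 7;       // n % 8 for non-negative integers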
Be sure to pass arrays by reference because this method is faster in JavaScript. If a routine calls itself last, you can adjust the arguments and branch back to the top, saving the overhead of another procedure call. This is called removing tail recursion.
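Here is a minimal sketch of removing tail recursion (a hypothetical summing function, not from the original text): the trailing recursive call becomes a branch back to the top of a loop, saving a procedure call per element:

// Tail-recursive form: the last act is the recursive call
function sumFrom(arr, i, acc) {
    if (i >= arr.length) { return acc; }
    return sumFrom(arr, i + 1, acc + arr[i]);
}

// Tail recursion removed: adjust the variables and loop back to the top
function sumLoop(arr) {
    var acc = 0;
    for (var i = 0; i < arr.length; i++) {
        acc += arr[i];
    }
    return acc;
}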
For More Information
For more tuning tips, see the following sites:
- http://www.cs.bell-labs.com/cm/cs/pearls/apprules.html: Jon Bentley's rules for code tuning.
- http://www.refactoring.com/catalog/: Martin Fowler's catalog of refactoring techniques.
- http://home.earthlink.net/~kendrasg/info/js_opt/: Jeff Greenberg's JavaScript speed-optimization tests.
- http://www.xp123.com/xplor/xp0002d/: William Wake's refactorings from Bentley's Writing Efficient Programs.
Flash ActionScript Optimization
Like JavaScript, ActionScript is based on the ECMAScript standard. Unlike JavaScript, the ActionScript interpreter is embedded within Macromedia’s popular Flash plug-in and has different performance characteristics. Although the techniques used in this chapter will work for Flash, two additional approaches are available to Flash programmers: you can speed up Flash performance by replacing slower methods through the prototype property, and you can hand-tune your code with Flasm.
Flasm is a command-line assembler/disassembler of Flash ActionScript bytecode. It disassembles your entire SWF file, allowing you to perform optimizations by hand and replace all actions in the original SWF with your optimized routines. See http://flasm.sourceforge.net/#optimization for more information.
You can replace slower methods in ActionScript by rewriting these routines and replacing the originals with the prototype method. The Prototype site (http://www.layer51.com/proto/) provides free Flash functions redefined for speed or flexibility. These functions boost performance for versions up to Flash 5. Flash MX has improved performance, but these redefined functions can still help.
Summary
To speed execution, optimize your code at the right design level or combination of levels. Start with global optimizations first (for example, algorithm and data structure choices), and then move down toward more local optimizations until your program is fast enough. Refactor to simplify your code, and then minimize DOM interaction and I/O requests. Finally, if all else fails, tune your code locally with the techniques outlined in this chapter. Measure each change, and cool hot spots one at a time. Here is a summary of the optimization techniques discussed in this chapter:
- Avoid optimization if at all possible.
- Optimize globally to locally until the code is fast enough.
- Measure your changes.
- Keep Pareto in mind.
- Cool hot spots one at a time.
- Minimize DOM and I/O interaction (object and property lookups, create and edit subtrees offline).
- Shorten scope chains to maximize lookup speed. Avoid with statements because they extend scope chains.
- Cache frequently used values.
- Simplify loop conditions, hoist loop-invariant code, flip and reverse loops, and unroll loops with an optimized Duff’s Device.
- Use local optimizations last.
- Tune expressions for speed.
Recommended Reading
If you want to learn more about optimizing JavaScript, I recommend these sources:
- Jon Bentley’s Programming Pearls, 2nd ed. (Addison-Wesley, 1999) and More Programming Pearls: Confessions of a Coder (Addison-Wesley, 1988). These books include many examples of code tuning and recap the 27 code-tuning rules in his out-of-print classic, Writing Efficient Programs.
- Brian Kernighan and Rob Pike’s The Practice of Programming (Addison-Wesley, 1999) describes best programming practices, including Chapter 7 on performance.
- Donald Knuth’s The Art of Computer Programming series (Addison-Wesley, 1998).
- Steve C. McConnell’s Code Complete: A Practical Handbook of Software Construction (Microsoft Press, 1993), especially Chapters 28 and 29.
Footnotes:
[ref 1] Jon Bentley, Programming Pearls, 2d ed. (Boston, MA: Addison-Wesley, 1999).
[ref 2] Brian W. Kernighan and Rob Pike, The Practice of Programming (Boston, MA: Addison-Wesley, 1999). See the “Performance” chapter, 165-188.
[ref 3] Bentley, Programming Pearls, 7. The space-time tradeoff does not always hold. The ideal situation is mutual improvement. Bentley found that often “reducing a program’s space requirements also reduces its run time.”
[ref 4] Mozilla.org, “Rhino: JavaScript for Java” [online], (Mountain View, CA: The Mozilla Organization, 1998), available from the Internet at http://www.mozilla.org/rhino/.
[ref 5] Geoffrey Fox, “JavaScript Performance Issues,” Online Seminar, Northeast Parallel Architectures Center [online], (Syracuse, NY: Syracuse University, 1999), available from the Internet at http://www.npac.syr.edu/users/gcf/forcps616javascript/msrcobjectsapril99/tsld022.htm. According to Fox, JavaScript is about 5,000 times slower than C, 100 times slower than interpreted Java, and 10 times slower than Perl.
[ref 6] Bentley, Programming Pearls.
[ref 7] Martin Fowler, Refactoring: Improving the Design of Existing Code (Boston, MA: Addison-Wesley, 1999).
[ref 8] Vilfredo Pareto, Cours d’économie politique professé à l’Université de Lausanne, 2 vols. (Lausanne, Switzerland: F. Rouge, 1896-97).
[ref 9] Barry W. Boehm, “Improving Software Productivity,” IEEE Computer 20, no. 9 (1987): 43-57.
[ref 10] Barry W. Boehm and Philip N. Papaccio, “Understanding and Controlling Software Costs,” IEEE Transactions on Software Engineering 14, no. 10 (1988): 1462-1477.
[ref 11] Donald E. Knuth, “An Empirical Study of FORTRAN Programs,” Software: Practice and Experience 1, no. 2 (1971): 105-133. Knuth analyzed programs found by sifting through wastebaskets and directories on the computer center’s machines.
[ref 12] Kernighan and Pike, The Practice of Programming, 41.
[ref 13] Andrew Hunt and David Thomas, The Pragmatic Programmer: From Journeyman to Master (Boston, MA: Addison-Wesley, 1999), 179.
[ref 14] Bentley, Programming Pearls, 192.
[ref 15] Scott Porter, email to author, 16 July 2002.
[ref 16] Tom Duff, “Tom Duff on Duff’s Device” [electronic mailing list], (Linköping, Sweden: Lysator Academic Computer Society, 10 November 1983 [archived reproduction]), available from the Internet at http://www.lysator.liu.se/c/duffs-device.html. Duff describes the loop unrolling technique he developed while at Lucasfilm Ltd.
Andy King is the author of the popular book Speed Up Your Site: Web Site Optimization. Web Site Optimization, LLC is a leading provider of web site optimization and search engine marketing services that “tune up” web sites for increased usability, conversion rates, traffic, and profitability. For more information about Web Site Optimization, visit http://www.websiteoptimization.com.