Project "HoT MetaL JaZz" - Part 4

In previous editions of this series, we went over some of the basic functions of the HTML5 Canvas element, including plotting paths, drawing images, and manipulating drawing parameters. Now, we will go over a very simple game framework written in JavaScript that uses HTML5 Canvas elements to display visual output.

The framework I created demonstrates a lot of tricks, as well as basic functions common in games. Features include:

  • Frame-based sprite animation
  • Sprite scaling, rotation, and transparency
  • Keyboard input management
  • Ability to scale to browser size while maintaining original aspect ratio

Furthermore, I've added functions such as a "wait until loaded" feature that ensures the game doesn't start until all related assets have been fully loaded. It currently has no progress indicator (just a static "Loading..." message), so you may want to add one of those if you decide to use any of the framework for your own games.

Now, let's go over some of the highlights of the framework for HoT MetaL JaZz, beginning with the front-end: "hotmetal.html".


When hotmetal.html loads, it runs the script function "startGame()", which is embedded right into the HTML page. Within the "startGame()" function, we create a new "GameCore" object, the constructor for which takes several parameters. Let's go over those quickly:

The first argument is the primary canvas - what the player will see. The next two arguments specify a maximum width and height for the primary canvas - this is mainly to ensure that the canvas doesn't get so large that performance begins to degrade too far. The next argument assigns the backbuffer canvas. The next two arguments are the width and height of the backbuffer - this value does not change; instead, this is the "natural" size of the backbuffer. The final argument is the desired frames-per-second timing at which to run the game.

Also, the body element of the page is given a function to run when the browser is resized: "game.resize()". We will go over this function as we go through the "gamecore.js" file.


The constructor for the GameCore object is fairly straightforward - we assign values, and run a couple of functions.

We begin by setting some variables to keep track of our canvases and their sizes. Then we call "this.resize()" to size the canvas for a "best fit" within the browser's viewing area. We also assign a GameInputManager to the GameCore. As for the logic of GameCore, we use a demonstration logic script called "HotMetalLogic" - this contains all the code specific to how the actual game functions.

After this is done, we call "this.init()". Here, we set up the contexts for our canvases and save them. We clear both contexts, and draw a simple loading screen. At the end of the method, we call "this.loadData()".

The GameCore.loadData() method simply offloads the data-loading tasks to the logic part of the game. It will then run "this.assertReady()", which will cause the game to wait until the logic is fully loaded and ready.

The GameCore.assertReady() function will call the logic script's "images.loadImages()" function. The logic.images object is an instance of GameImageManager, and its "loadImages()" function will (re)assign the URL to each image loaded by the logic script. The function sets the GameImageManager.ready property to "true" if all images have finished loading. GameCore.assertReady will start running the logic (this.logic.run()) if the GameImageManager.ready property is set to true; otherwise, it will wait 1000 milliseconds (1 second) and run the check again. This will continue until the GameImageManager lets us know it is ready.
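The polling described above can be sketched as a standalone function. This is a sketch only - it assumes, as described, a logic object with an images manager exposing loadImages() and a ready flag; the exact member names in the real framework may differ.

```javascript
// A minimal sketch of the assertReady polling pattern described above.
// Assumed names: game.logic.images.loadImages(), .ready, game.logic.run().
function assertReady(game) {
  game.logic.images.loadImages();   // (re)check the loading status
  if (game.logic.images.ready) {
    game.logic.run();               // everything is loaded - start the game
  } else {
    // not ready yet: wait one second and check again
    setTimeout(function () { assertReady(game); }, 1000);
  }
}
```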

GameCore.resize() is an interesting function. It will find the "best fit" for the game's primary canvas within the browser window, all while maintaining the original aspect ratio of the canvas. First, we get the width-wise aspect ratio, and the "reverse", height-wise aspect ratio. We query the browser's "inner" window width and height, and then run a few checks to make sure the resize fits properly based on the aspect ratios. When all is said and done, you can resize your browser any way you want and have the primary canvas fit inside - with no portion of the primary canvas stretching past the window boundaries.
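The "best fit" math can be sketched as a pure function. The parameter names here are mine - the real GameCore works on its own members - but the aspect-ratio logic matches the description above.

```javascript
// A sketch of the "best fit" calculation: the largest size that fits the
// window while keeping the canvas's natural aspect ratio.
function bestFit(naturalWidth, naturalHeight, windowWidth, windowHeight) {
  var aspect = naturalWidth / naturalHeight;        // width-wise ratio
  var reverseAspect = naturalHeight / naturalWidth; // height-wise ratio

  var width = windowWidth;              // try filling the full width first
  var height = width * reverseAspect;
  if (height > windowHeight) {          // too tall - fit to height instead
    height = windowHeight;
    width = height * aspect;
  }
  return { width: width, height: height };
}
```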


The GameInputManager object is quite simple - it sets up some functions for the document to run whenever a key is pressed or released. It also releases all input whenever the document loses keyboard focus via the document.onblur event. To determine whether a particular key is pressed, just check the GameInputManager.keyControls[keyCode] variable - it will be "true" if the key with the specified keyCode is pressed; otherwise, it will be "false" or "undefined".
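A rough sketch of that input-manager idea follows. The keyControls lookup is the part game code relies on; the actual event wiring in the framework may differ from the comment shown.

```javascript
// A sketch of a keyboard input manager as described above.
function GameInputManager() {
  this.keyControls = {};   // keyCode -> true while the key is held down
  var self = this;
  this.onKeyDown = function (e) { self.keyControls[e.keyCode] = true; };
  this.onKeyUp = function (e) { self.keyControls[e.keyCode] = false; };
  this.onBlur = function () { self.keyControls = {}; };  // release all keys
}
// In the browser you would attach the handlers, for example:
// document.onkeydown = input.onKeyDown; document.onkeyup = input.onKeyUp;
// document.onblur = input.onBlur;   (where "input" is your manager instance)
```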


This file contains two object constructors. The first is GameImageManager; the second is TaggedImage.

GameImageManager sets up an array of TaggedImage objects, which stores each image inserted into the manager along with a string ID for each. To begin adding images to the GameImageManager, use the function GameImageManager.addImage(filename, tag). The filename is the URL of the image to load; tag is a string identifier used to store and access the image. For example, you can use the code:

myImageManager.addImage("images/my-image.png", "sprite");

The file will be associated with the string "sprite". Thus, to retrieve the image from the GameImageManager instance, you would use:

var img = myImageManager.getImage("sprite");

Finally, the "loadImages()" function will check the "complete" status of each image loaded into the GameImageManager. If it finds that all images have a "complete" status, it will set the GameImageManager's "ready" flag to true.
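The completeness check might look something like the following. This assumes each TaggedImage exposes the underlying DOM Image as an image property - that name is my assumption - and relies on the browser's standard "complete" property.

```javascript
// A sketch of the "are all images loaded?" check described above.
function checkAllLoaded(manager) {
  for (var i = 0; i < manager.images.length; i++) {
    if (!manager.images[i].image.complete) {
      return false;                 // at least one image is still loading
    }
  }
  manager.ready = true;             // every image reported "complete"
  return true;
}
```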


This file contains three object constructors: GameSprite; GameSpriteProperty; and GameSpriteAnimation.

GameSprite.update(elapsedTime) will begin its task by checking whether the sprite is animated - if so, it calls the update(elapsedTime) function of the animation sequence. It continues by ensuring the sprite speed hasn't exceeded the designated maximum, and then updating the sprite's position by factoring its velocity in.
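Here is a sketch of the movement half of that update. The field names (vx, vy, maxSpeed) and the velocity-in-pixels-per-second convention are my assumptions; the real sprite may store these differently.

```javascript
// A sketch of the speed clamp and position update described above.
function updateSprite(sprite, elapsedTime) {
  var speed = Math.sqrt(sprite.vx * sprite.vx + sprite.vy * sprite.vy);
  if (speed > sprite.maxSpeed && speed > 0) {
    var scale = sprite.maxSpeed / speed;   // clamp velocity to the maximum
    sprite.vx *= scale;
    sprite.vy *= scale;
  }
  // elapsedTime is in milliseconds; velocity is in pixels per second
  sprite.x += sprite.vx * (elapsedTime / 1000);
  sprite.y += sprite.vy * (elapsedTime / 1000);
}
```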

GameSprite.draw(context) takes the backbuffer context as its sole argument. The function sets the appropriate globalAlpha, translation, rotation, and scale of the context to match those of the sprite, and draws the sprite to the context. If the sprite is animated, it will draw the current frame; otherwise, it will draw the image parameter used in the GameSprite's constructor.
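The draw sequence can be sketched like this - the context calls are the standard Canvas 2D API, but the sprite field names are assumptions on my part.

```javascript
// A sketch of the transform-then-draw sequence described above.
function drawSprite(sprite, context) {
  context.save();                          // protect the caller's state
  context.globalAlpha = sprite.alpha;      // sprite transparency
  context.translate(sprite.x, sprite.y);
  context.rotate(sprite.rotation);
  context.scale(sprite.scaleX, sprite.scaleY);
  context.drawImage(sprite.image, 0, 0);   // or the current animation frame
  context.restore();                       // undo the transforms
}
```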

GameSprite.setProperty(name, value) and GameSprite.getProperty(name) allow the sprite to have an arbitrary set of properties. For example, you can add a "Hit Points" property to the sprite, without having to write a new sprite class. The GameSpriteProperty class simply holds a name for the property and its value.

The GameSpriteAnimation(image, width, height, delay) constructor takes an image, divides it into "cells" of the given width and height, and delays frame transitions by "delay" milliseconds. The update function simply checks whether it needs to transition to the next frame, by comparing the current frame time with the delay of the transition. If it's time to transition, the frame index is incremented. If the frame index has gone past the last frame index, the animation "loops" back to the first frame.
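The frame-advance logic might be sketched as follows; frameTime, frameIndex and frameCount are assumed names for the accumulated time, current cell, and total number of cells.

```javascript
// A sketch of the frame transition check described above.
function updateAnimation(anim, elapsedTime) {
  anim.frameTime += elapsedTime;
  if (anim.frameTime >= anim.delay) {   // time to switch frames?
    anim.frameTime = 0;
    anim.frameIndex++;
    if (anim.frameIndex >= anim.frameCount) {
      anim.frameIndex = 0;              // loop back to the first frame
    }
  }
}
```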


The HotMetalLogic class is the "glue" that ties all the previous classes together. It uses a GameImageManager to load and assign images, it interacts with sprites using the GameInputManager (use the arrow keys to move around), and handles the AI for different sprites.

When playing the game, you need to avoid the fireballs shooting out of the flares on the four corners of the play area. As the flares shoot out fireballs, they gradually weaken until they disappear. When all flares and fireballs have been extinguished, the game ends. You lose points each time a fireball hits you.

Of course, this is just a modest demo of what is possible. There are better ways to set up a framework for a game - this is just to get your creative juices flowing.

Some suggestions for improvements:

Make the GameCore capable of holding an array of logic scripts, and switch between them. For example, run a logic script for a main menu, and another logic script for the game in action.

Sort sprite drawing order by Y: The farther from the bottom of the canvas a sprite is, the sooner it should be drawn, so that sprites appear behind or in front of their appropriate peers.
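The Y-sorting suggestion can be sketched in a couple of lines: draw sprites in ascending y order, so sprites lower on the canvas paint over the ones above them.

```javascript
// A sketch of the suggested Y-sort (returns a new, sorted array).
function sortByY(sprites) {
  return sprites.slice().sort(function (a, b) { return a.y - b.y; });
}
```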

How did they do it? - Al Bhed

I am introducing a new series I call "How did they do it?". Basically, what is going to happen here is that I am going to tackle some odd, interesting, or complicated aspects of different games. I will explain what each one is, give examples of where it is used, and attempt to mimic it.

Note: This will be based on mechanics, for example, stats, formulas, stuff like that. So don’t expect something outrageous such as importing models or special effects.

Note: A code sample is attached at the bottom of the article. I wouldn't want people to use all their brain power trying to understand my theory and translate it to code when there is a download available. I will try to put all the stuff in classes; that way, you don't have to know the code or even how it works - you just have to know how to use it. However, reading it will be an interesting learning experience.

The first article will be on Al Bhed.

Al What?
Al Bhed is a fictional language used in Final Fantasy X (FFX) and FFX-2. In the game there is another language which, obviously, you do not understand. The minigame is that you have the opportunity to find "letters" throughout the game; the more you find, the more you understand. If you want to know more about this, go here. Each letter is represented by another character.

Simple concept right?

Stuff to note:
This is important if you want to make a new language: the words have to be pronounceable. For example, if you say "Hello" in English, it turns out as "Rammu" in Al Bhed, which is pronounceable. There is a simple reason for that. You will notice that the vowels, along with Y, are only ever swapped with each other. This ensures that any word that is pronounceable in English is also pronounceable in Al Bhed.

So How does it work?
Obviously I can’t tell you how Square Enix did it simply because I do not know. I do however have a method that you can use.

#1. Create an array with space for 26 entries, representing the letters a to z. It should hold Booleans so that you can simply check true or false.

When we come to a later part of the code, you will have to check whether the player has received the letter. Otherwise the code will blindly convert to Al Bhed without caring whether you know the letter or not.

#2. Make a function to modify the above array, setting an entry to true or false.

Same as above: if you never check that they do indeed have the letter, the code will convert without taking the collected letters into consideration.

#3. Make a function to convert each individual letter from English to Al Bhed.

Obviously to translate the letters.

Why only one way?
You only need it one way; the text only ever has to be translated into one language. If you want to do it both ways, by all means do so. It just makes more sense for the programmer to type in the message in English and have it converted to Al Bhed.

Why each individual letter? Why not just use a replace function?
The problem with converting all the letters using a replace function is that you have less control over it - but that isn't even the big problem. Consider the string below.

Hello, I am converting text.

Suppose you convert this to Al Bhed using a replace function. You start by converting all the a’s:
Hello, I ym converting text.

Everything is still fine. Now you get to the e’s:
Hallo, I ym convarting taxt.

Everything is still fine. Now you continue this until you get to the I’s:
Hallo, e ym convarteng taxt.

Perfect. Now you get to the O’s:
Hallu, e ym cunvarteng taxt.

Still great. This is where the problem comes in. You keep going down the alphabet converting everything, and eventually you reach the U's:
Halli, e ym cinvarteng taxt.

Something isn't right here. The original text does not contain any U's, but the O-pass created some ("Hallu", "cunvarteng"), and the U-pass then converted those already-converted letters. So in the end, you have an inaccurate translation. This is even worse with the consonants. That is why you convert the word one letter at a time, from left to right, converting each letter only once.

#4. Go through the string, checking letters one by one, converting a letter only if the entry in the array from #1 representing it is still set to false (i.e. not yet collected).

The idea is that the words should not make sense until letters have been found - only letters the player has not yet "obtained" should be converted. That way, the more letters they get, the more they understand. If the array value is false, show the Al Bhed character; if it is true, show the English letter. So if only certain letters have been found, only certain pieces of a word will be in English.

Eg. When translating "Hello" while you only have the "H" and the "L" letters, the word should be displayed as "Hallu".
Usually the letters you have are displayed in a different color.
So, if the user has the letter, simply don't convert it.

#5. Return the converted string.

How else are people supposed to read the converted string?

Problems you will face
If you just blindly convert everything, you will run into problems.
Say I want to convert this:

Hey! Zappy77! Watch out for Pinky! My pet mouse!

Problem #1: You will not display the punctuation, so you have to make the computer check for it.

My method is simple: if the character is not in the list, simply write the input character unchanged. That way you don't have to check for a "!", a ",", etc.

Problem #2: If you are like me, you probably coded the engine to search for letters from A - Z, all in capital letters, and did not specify the lower case letters. If that is the case, the lower case letters will not be converted.

Convert the inputted letter to uppercase before processing.

Problem #3:
Assuming you had Problem #2, the solution will create another problem. When the string is returned, the whole string will be in uppercase, which puts a little more strain on the reader's eyes. This is not good.

Simple really: before you add a character to the main string, check from the buffer whether it was originally upper or lower case. If it was lower case, simply convert it to lower case; if it was upper case, simply add it to the string.
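Putting steps #1 to #5 together with the fixes for Problems #1 to #3, here is how the whole thing might look in JavaScript. My attached source is C++, so treat this as a sketch: the cipher string is the standard FFX substitution alphabet, and all the names are mine. The collected array is the 26-slot Boolean array from step #1, with index 0 standing for "a".

```javascript
// The standard FFX Al Bhed cipher for A-Z (position 0 = 'A', 1 = 'B', ...).
var ALBHED = "YPLTAVKREZGMSHUBXNCDIJFQOW";

// Convert English text to Al Bhed, honouring the player's collected letters.
function toAlBhed(text, collected) {
  var out = "";
  for (var i = 0; i < text.length; i++) {
    var ch = text.charAt(i);
    var upper = ch.toUpperCase();
    var index = upper.charCodeAt(0) - 65;   // 'A' -> 0 ... 'Z' -> 25
    if (index < 0 || index > 25) {
      out += ch;            // punctuation, digits, spaces: pass through as-is
    } else if (collected[index]) {
      out += ch;            // letter already found: leave it in English
    } else {
      var converted = ALBHED.charAt(index);
      // preserve the original case so the output isn't all uppercase
      out += (ch === upper) ? converted : converted.toLowerCase();
    }
  }
  return out;
}
```

Bracket handling for proper nouns (Problem #4) is deliberately left out of this sketch.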

Problem #4:
If you convert the above string, the output would be:
Rao! Wybbo77! Fydlr uid vun Behgo! So bad suica!
That’s bad! Why? Zappy77 and Pinky are proper nouns. Meaning they are names. You cannot convert names, that would be stupid. Nobody will know who they are referring to.

If there is a place called "Death Cage", would you want the player to see "Death Cage" (which is a name), or should they read "Taydr Lyka" and not know what the place's name is?


I wrote a simple text parser. If I don't want a part converted, I simply put [square brackets] around it. The script will then skip converting the words between the square brackets and simply remove the brackets themselves.

If you cannot write a function like this yourself, simply refer to my included source code, or study this sample extracted from my source:

// Find the first bracketed region (string::npos means "none ahead").
size_t a = modifier.find("[");
size_t b = modifier.find("]");

for (size_t i = 0; i < len; i++)
{
    if (i >= a && i <= b)
    {
        // Inside [brackets]: copy the character through unconverted,
        // dropping the bracket characters themselves.
        if (buffer[i] != '[' && buffer[i] != ']')
            albhed += buffer[i];
    }

    // Once past the current region, look for the next pair.
    if (i > b)
    {
        a = modifier.find("[", i);
        b = modifier.find("]", i);
    }
}

This obviously makes more sense in the code itself.
Anyway, you can find the source, as well as an example of using the code, here.

So that’s it for the first article of the series of “How did they do it?”. If you find any bugs in the code, please notify me. This is a full blown Al Bhed engine, but might have 1 or 2 bugs. I wrote this code in about an hour so don’t expect it to be perfect.


Starting out with 3D #5 (U - Z)

This is the final piece of  the 3D terminology. *Thank goodness*. If you followed all of these tutorials you should know the drill by now. If not, best read the other parts if you would like to know more!

The other parts can be found here:

Part 1: A - E
Part 2: F - L
Part 3: M - P
Part 4: Q - T
UV Texture Co-ordinates
The co-ordinate system used for assigning textures. UV co-ordinate space is 2D, so a projection method must be used to "unwrap" the UVs from the model and lay them out on a flat plane. This plane can then be copied into a paint package and manipulated to create the final model texture.

View
The region of the 3D scene that is displayed to the artist. For example, from the top.

Volumetric Lights and Textures
Volumetric lights are lights which can be viewed in the 3D space rather than on a flat surface. Similarly, volumetric textures are textures applied throughout a volume of space rather than across a surface.

Weighting
The process of determining which bone in a skeleton affects which part of the model's surface. In a lot of cases, the influence is simply painted onto the model.

Wireframe
A shading method in which lines are displayed to represent the model's form.

Z-Depth
The distance at which a point or surface lies within the scene. Z-depth is used to calculate where a light casts shadows and also which surfaces are actually visible.

If you know every single one of these terms, you are amazing! If you learnt new terms, great! And if you learnt so much that you are pumped about these articles and are going to tell all your friends to visit this website, best of all!

Let me just say, don’t feel like you should know all of this off by heart, as long as you have a basic idea of most of them you will do fine!

Thank you for reading, and as I promised, here is a downloadable PDF file of everything in the past 5 articles, for future reference.
Note: Please excuse the messiness. The converter did a pretty bad job of converting the pages. At least the information is there, right?

Starting out with 3D #4 (Q - T)

Picking up where we last stopped. Here are the words between Q and T.
The others can be found here:
Part 1: A - E
Part 2: F - L
Part 3: M - P

Quad View
A method of displaying 4 viewports for you to view your model. The standard viewports are usually top, front, side and perspective.

Raytracing
A technique for rendering scenes. Raytracing traces the path of every ray of light from its source until it leaves the scene or is too dim to be visible in the current image.

Reflection Map
An environment map used to simulate real world reflection effects. They render quicker than other methods such as raytracing.

Rendering
The process of converting the 3D data stored in a software package into the two-dimensional image "seen" by the camera within the scene. Rendering brings together the scene geometry, Z-depth, surface properties, lighting set-up and rendering method to create a finished frame.

Rigging
When preparing a 3D model for animation, you usually add an underlying skeleton, which makes everything easier to animate. This skeleton is linked to the model, and that process is known as rigging. When the process is completed, you refer to the result as a rigged character.

Scene
A set of 3D objects. This includes the models themselves, lights, cameras, etc.

Shading
The process of calculating how the model's surface should react to light.

Skinning
The process of binding the surface of a model to the skeleton during character rigging.

Skeleton
A network of bones used to define and control the motion of a model during character animation. Moving a bone causes the mesh of the model to move and deform.

Snapping
The automatic alignment of one object to another or to a reference grid. This is used when extreme precision is required.

Soft Body Dynamics
Simulates the behaviour of models that deform when they collide with other objects, such as a cloth on a table.

Specularity
A property which determines the way in which highlights appear on the specified surface.

Symmetry
A modeling option which duplicates the model across a specified axis. This is used a lot for organic modeling, as only one half of the model has to be modeled.

Texture
An image that is applied to a 3D model to give it detail. Textures can be photographs or CGI, and can be applied to each of the material channels.

Trimming
The process by which NURBS surfaces are edited, allowing 3D artists to define areas that will be made invisible and excluded from rendering. Theoretically those areas still exist, and you can still edit them if you please. This is used to get rid of pieces of the model that will never be seen.

In the next part, I will finish up the common 3D terms. I will also add a downloadable PDF document with all of these words, if you wish to keep them for future reference.

Starting out with 3D #3 (M - P)

This is the follow-up to #1, which can be found here, and #2, which can be found here.
We are almost finished! Bear with me!

Material
Mathematical attributes that determine the way in which the model reacts to light.

Mesh
The surface geometry of a 3D model.

Metaball modeling
A technique in which models are created using spheres that attract and cling to each other according to their proximity to one another and their field of influence. This technique is mostly used when creating organic models.

Model
As a verb, it means to build a 3D object. As a noun, it refers to the finished 3D object itself.

Modifier
A modeling tool which deforms the structure of an entire object. Eg. Lathe.

Multi-pass rendering
To render out the lighting or surface attributes of a scene as separate images, with the idea to put them together at a later stage. This technique can be used to speed up rendering or in order to develop the look of a scene by compositing the different passes together in various permutations.

Normal
An imaginary line drawn from the centre of a polygon at right angles to the surface.

Null
A point in a scene that does not render, but instead is used as a reference.

NURBS
Stands for Non-Uniform Rational B-Splines. NURBS curves are two-dimensional curves whose shape is determined by a series of control points between which they pass. When a series of such curves are joined together, they form a 3D NURBS surface. NURBS are commonly used to model organic curved-surface objects.

Object
Anything that can be inserted and manipulated in a 3D scene. This can be lights, models, particles, cameras, etc.

Patch
An area of a NURBS surface enclosed by a span square: the shape created by the intersection of four isoparms, two in the U direction and two in the V direction.

Plane
A flat, 2D surface. This can be used for modeling or for reference, depending on the final goal.

Point
A single location in coordinate space. Points can be linked up to form polygons, or used as control vertices for NURBS curves.

Polygon
Geometry formed by connecting 3 or more points. This is why in 2D you work with squares, but with triangles in 3D.

Primitive
A simple 3D form often used as a basis when modeling something else. Examples include cubes, planes, spheres, etc.

The next part goes from Q to T.

Starting out with 3D #2 (F - L)

This section builds on #1, which can be found here.

Face
Simply put, the shape from which a 3D object has been extruded: the front or back of an extruded object, or any of the polygons which make up its boundary.

Fall off
The way in which the intensity of a light diminishes with distance from its source. In the real world this follows the inverse square law, which states that the intensity is inversely proportional to the square of the distance. In other words, the further from the light, the darker the object. Real-world examples would be the sun, a light bulb, etc.
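The inverse square law above can be written as a tiny helper, where s is the light's strength and d the distance in arbitrary units:

```javascript
// Relative intensity at distance d for a light of strength s.
// Doubling the distance quarters the intensity.
function falloff(s, d) {
  return s / (d * d);
}
```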

Forward Kinematics (FK)
A character animation technique for controlling the motion of bones in a chain, for example the limbs. The opposite of this is Inverse Kinematics. (See further down)

Global Illumination
A type of rendering that computes all the possible light interactions between surfaces in a given scene.
Grime (Dirt) map
2D images applied to a particular channel of a material. When the image is projected across the surface of an object, it breaks up the channel's flat, even value, creating realistic variations.

Hard Body Dynamics
Also known as rigid body dynamics. It simulates the physical behaviour of rigid objects that do not deform on collision.

Hardware Rendering
Renders a preview of a 3D scene providing real-time feedback. To make this possible it removes certain processor-intensive effects such as volumetrics, shadowing and realistic refraction.

Hierarchy
The relationship of the sub-objects within a model or scene to one another. Sub-objects may exist as parents, children or independents. A parent object controls the motion of all child objects linked to it. The motion of a child object does not affect its parent.

Interpolation
When a 3D package calculates the in-between positions between two keyframes.

Inverse Kinematics (IK)
Inverse Kinematics is a character animation technique in which the end bone of a chain is
assigned a goal object. When the goal object moves, the bone moves with it, dragging the rest of the chain behind it. For example, when moving a hand up, the whole arm moves up with it. The reverse is Forward Kinematics.

Joint
A point between the bones in a character rig.

Keyframe
A reference point for animation. A pose is set up at a certain time in the animation, another at a later time, and the computer calculates all the frames between the two reference points. (See Interpolation)

Lathing
A modeling technique in which a two-dimensional profile is duplicated in rotation around a reference axis. The duplicates then join up to create a continuous three-dimensional surface. Lathing is particularly useful for creating objects with rotational axes of symmetry, such as plates, glasses, vases or wheels.

Lofting
A technique in which a continuous three-dimensional surface is created by selecting and joining multiple two-dimensional cross sections or profiles. You basically have a "path" and an "object"; the object will follow the path.

Low Poly Modeling
Creating a simplified model with a low polygon count for realtime use, such as in games. A few years ago, anything with a polygon count of 512 or under was considered low poly. Today, models of well over a few thousand polygons can still be rendered in realtime.

In part 3 I am going through the words between M and P.

Project "HoT MetaL JaZz" - Part 3

In our last episode, we demonstrated how to draw images to our canvas. Now we will dig deeper into the HTML5 Canvas drawing methods by looking at the path-drawing methods, and learn how to scale images.

Drawing straight lines is easy. We start with [context].beginPath(), plot our paths, apply a stroke and/or fill, then call [context].closePath() to finish up. For example, to draw a simple triangle, you could use code like the following. Open gamecore.js, and replace your current draw() method with the following:

GameCore.prototype.draw = function() {
  this.bufferContext.fillStyle = "rgb(0, 0, 0)";
  this.bufferContext.fillRect(0, 0, 300, 300);  // clear the backbuffer

  this.bufferContext.beginPath();
  this.bufferContext.moveTo(100, 100);
  this.bufferContext.lineTo(200, 100);
  this.bufferContext.lineTo(100, 200);
  this.bufferContext.lineTo(100, 100);

  this.bufferContext.fillStyle = "rgb(127, 127, 127)";
  this.bufferContext.fill();
  this.bufferContext.strokeStyle = "rgb(255, 255, 255)";
  this.bufferContext.stroke();
  this.bufferContext.closePath();

  // (finally, copy the backbuffer to the visible canvas, as in the previous part)

  setTimeout("game.draw()", this.frameSpeed);
};

The first difference is in the first line - we set the fillStyle to black explicitly, to ensure that we always get a black fill since we'll be changing the context's fillStyle in this example.

Then comes the call, [context].beginPath(). This simple call is used to indicate that we are starting to plot a path. After that, we use [context].moveTo() to move our "cursor" to the indicated X/Y coordinate, without plotting any paths. Three lines calling [context].lineTo() plot the path for a triangular shape. We set our fillStyle to gray, call [context].fill(), and this fills our triangle shape with the color gray. We then set our strokeStyle to white, and call [context].stroke() to draw the lines between points. A call to [context].closePath() lets the context know that this path is complete.

Of course, we can draw curves fairly easily, too. We can draw a triangle that looks "pinched", with each side bending inward toward the center of the shape. To do this, we replace our [context].lineTo() calls with the following:

this.bufferContext.quadraticCurveTo(125, 125, 200, 100);
this.bufferContext.quadraticCurveTo(125, 125, 100, 200);
this.bufferContext.quadraticCurveTo(125, 125, 100, 100);

Look out, those are some sharp points on that triangle!

Quadratic curves are simple: The first two arguments define the coordinate toward which the middle of the line will be "pulled", and the second pair of arguments defines the end point of the curve (the start point is wherever the path's "cursor" currently sits).

Of course, bezier curves are no problem for us, either. Change the code for drawing your triangle to the following:

this.bufferContext.bezierCurveTo(250,  50,  50,  50, 200, 100);
this.bufferContext.bezierCurveTo(125, 275, 275, 125, 100, 200);
this.bufferContext.bezierCurveTo( 50,  50,  50, 250, 100, 100);

The main difference between quadratic curves and bezier curves is that quadratic curves have one "pull" point, while bezier curves have two. Once you master these curves, you can make some very intriguing designs.

As a side note, you can assign an image as a fillStyle for a path. Doing so is quite simple, involving a call to [context].createPattern(). The function takes two arguments: The first is the image you want to use, and the second is a string that determines the pattern of repetition. Valid values for this argument are "repeat", "repeat-x", "repeat-y", and "no-repeat".

There are some other path-plotting functions: arcTo(float x1, float y1, float x2, float y2, float radius); and arc(float x, float y, float radius, float startAngle, float endAngle, boolean anticlockwise). Experiment with these to learn how to create things like pie charts, etc. Oh, and don't forget the rect(float x, float y, float width, float height) function!

Before we move on to scaling our game canvas, let's touch on one last detail about paths - there is a function, [context].isPointInPath(float x, float y), that will tell you whether the specified point is contained within the path you are currently plotting. It returns true if the point is contained within the path, and false otherwise. Use this for collision detection against complex shapes.

Now that you have a good understanding of paths, we can move on to something a little different. Let's say you want to scale your game canvas to fit the size of the browser window. Furthermore, you want to scale the image with the canvas, so that players with bigger screens see the same amount of view area, just stretched to fit.

If that's something you're interested in, you're in luck - it's pretty easy to do, so let's get to work. Firstly, go back to your skeleton HTML file. We will generally leave this alone for the most part, but there are times when it will need a slight tweak. In the <body> tag, add the onresize attribute shown in the line below:

<body onload="startGame();" onresize="game.resize();">

This makes the browser call our GameCore's .resize() function, which is:

GameCore.prototype.resize = function() {
  var winInnerWidth = window.innerWidth;
  var winInnerHeight = window.innerHeight;

  this.primaryCanvas.width = winInnerWidth;
  this.primaryCanvas.height = winInnerHeight;

  this.primaryCanvasWidth = this.primaryCanvas.width;
  this.primaryCanvasHeight = this.primaryCanvas.height;
};
This function simply resizes the primary canvas, without touching the buffer canvas. Thus, we can keep the same amount of data on-screen independent of the size of the game's primary canvas. The buffer just "stretches" to the size of the primary canvas.

Remember that this function is called whenever the window is resized, so you may want to add an extra call to the GameCore.resize() method at the end of your GameCore.init() method, just before the call to GameCore.loadData().

Here's an exercise: Add a minimum and maximum size to which the primary canvas will scale. For extra credit, have the primary canvas retain its original aspect ratio. Hint: The aspect ratio would be the canvas width divided by the canvas height.
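For reference, here is one possible approach to the exercise, sketched as a pure function. The function name and parameters are my own invention - you would hook it up inside resize() however you like:

```javascript
//clamp the width between minWidth and maxWidth, then derive the height
//from the original aspect ratio (width / height); if the result is
//taller than the window, size from the height instead
function fitCanvas(winWidth, winHeight, aspect, minWidth, maxWidth) {
  var width = Math.max(minWidth, Math.min(winWidth, maxWidth));
  var height = width / aspect;
  if (height > winHeight) {
    height = winHeight;
    width = height * aspect;
  }
  return { width: width, height: height };
}
```

Inside GameCore.resize(), you would then assign the returned width and height to the primary canvas instead of the raw window size.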

That about does it for laying the groundwork. In the next section, we'll begin to take what we've learned to help us build a simple framework for our game. In the meantime, there's a really nice "HTML5 Canvas Cheat Sheet" you should definitely check out.

Until next time, have fun!

Project "HoT MetaL JaZz" - Part 2

In the previous installment of this series, we added code to our project that showed us how to draw text to our canvas and manipulate it a bit. Now, we will learn more about the drawing functions within the HTML5 canvas element, including image-drawing capabilities.

Let us begin this exercise by modifying our base template for HTML5 games. Everything will be similar to the skeleton presented in Part 0 of our series, except that we will have a separate file for our game scripts.

The new skeleton code is as follows:

<!DOCTYPE html>
<html>
<head>
  <title>HoT MetaL JaZz - Rev.2</title>

  <script src="gamecore.js"></script>

  <script type="text/javascript">
    var game;
    function startGame() {
      game = new GameCore(document.getElementById("gamecanvas"),
                          document.getElementById("backbuffer"), 30);
    }
  </script>

  <style type="text/css">
    body { margin: 0 auto; text-align: center; }
  </style>
</head>

<body onload="startGame();">
  <canvas id="gamecanvas" width="300" height="300"></canvas>
  <canvas id="backbuffer" width="300" height="300" style="display:none;"></canvas>
</body>
</html>

Save the code into a plain text file, with a file name ending in ".html". This simple HTML file will allow us to write as much game code as we want without touching the "front-end" HTML file, with the exception of adding new script file definitions in the <head> area.

This skeleton code works with the script file "gamecore.js". To add this file, create a new, empty text file in the same folder as your HTML file, and name it "gamecore.js". We will use this new file to store the "core" scripts for our game.

We will define a GameCore class in gamecore.js. Its constructor will simply take a primary canvas, a buffer canvas, and a "desired" framerate as arguments. It will then store references to the canvas elements and do a few other basic setup tasks. Then, it will automatically call a function to initialize the game. Take the following code:

function GameCore(primaryCanvas, bufferCanvas, desiredFramerate) {
  this.primaryCanvas = primaryCanvas;
  this.bufferCanvas = bufferCanvas;

  //store the width and height of each canvas to reduce
  //the overhead involved in individual queries to canvas elements
  this.primaryCanvasWidth = primaryCanvas.width;
  this.primaryCanvasHeight = primaryCanvas.height;
  this.bufferCanvasWidth = bufferCanvas.width;
  this.bufferCanvasHeight = bufferCanvas.height;

  //To get a framerate of N frames per second, divide 1000 by N
  this.frameSpeed = 1000 / desiredFramerate;

  this.images = new Array();

  this.init();
}

As you can see, there isn't much going on here that we haven't covered. After we reference the Canvas objects, we store the width and height of each. We have a "frameSpeed" variable, which sets a timeout between refreshes - timeouts are measured in milliseconds (1/1000th of a second), so we get our per-frame delay by dividing 1000 by our desiredFramerate. The desiredFramerate should be a reasonable value - no more than 60, and in practice, about 30. Then we create a new array to store the images we will use for this example.

At the end of the GameCore constructor, we make a call to the init() function of our GameCore object. For now, our GameCore.init() function looks like this:

GameCore.prototype.init = function() {
  if (this.primaryCanvas.getContext) {
    this.primaryContext = this.primaryCanvas.getContext("2d");
  }

  if (this.bufferCanvas.getContext) {
    this.bufferContext = this.bufferCanvas.getContext("2d");
  }

  this.loadData();
};

We simply set up the references to the context of our primary and buffer canvases here. Once this has been done, the GameCore.loadData() function is called. This function is defined as such:

GameCore.prototype.loadData = function() {
  var testImage = new Image();
  testImage.src = "sprite.png";
  this.images.push(testImage);

  //start the draw loop
  this.draw();
};

For the purpose of our demonstration, we define the loadData() function to create a new Image object, set it to point to an image named "sprite.png", and then add the image to our GameCore.images array. Once that is done, we are ready to start the game. Our GameCore.draw() function looks like this:

GameCore.prototype.draw = function() {
  //fill the backbuffer with the default fillStyle (black)
  this.bufferContext.fillRect(0, 0, this.bufferCanvasWidth, this.bufferCanvasHeight);

  for (var currImage = 0; currImage < this.images.length; currImage++) {
    if (this.images[currImage].complete) {
      this.bufferContext.drawImage(this.images[currImage],
                                   this.bufferCanvasWidth / 2,
                                   this.bufferCanvasHeight / 2);
    }
  }

  //"flip" the backbuffer onto the primary canvas
  this.primaryContext.clearRect(0, 0, this.primaryCanvasWidth, this.primaryCanvasHeight);
  this.primaryContext.drawImage(this.bufferCanvas, 0, 0,
                                this.primaryCanvasWidth, this.primaryCanvasHeight);

  setTimeout("game.draw()", this.frameSpeed);
};

This function begins by filling the backbuffer with a solid black rectangle (remember, the default fillStyle is black). We then iterate through all the images in our demo (currently, just one) to make sure they have completely loaded. If the image is finished loading, we draw the image to the center of our backbuffer using the context.drawImage(...) function. Here, we put our image as the first argument, and the X/Y coordinate at which to place the image.

Once we have iterated through all our images, we clear the primary context, and then "flip" our backbuffer onto it using the drawImage() function. This time, we call the drawImage() function with five arguments: the source (our backbuffer canvas), the X/Y coordinates, and the width/height at which to draw it - here, the size of the primary canvas, so the buffer stretches to fit. We then set a timeout for redrawing the canvas.

To test this out, we will need our "sprite.png" image file in our game directory. If you don't feel like making your own, here is the one I used in writing this article:

When you open the page, you should see your sprite image appear near the center of the canvas. It is off-center a bit, because the image is drawn starting from the top-left. In other words, the top-left of the image is in the center of the canvas!

Later, we will give suggestions for easily drawing images from their center, rather than their top-left corners. For now, though, let's go into detail on the drawImage() function.

The drawImage() function template looks like this:

drawImage(Object image, float dx, float dy, float dw, float dh)

Where image can be an HTML image, HTML canvas, or HTML video element. dx/dy indicate the origin of drawing (i.e. the top-left corner of where to draw). dw/dh are optional arguments that specify the width and height at which to draw the image - the whole image is scaled to fit those dimensions. In other words, if our image is 300x300 and we want to draw it at half size, we would call drawImage() as such:

[context].drawImage(myImage, 0, 0, 150, 150);

If dw/dh are not defined, the image is drawn at its natural size.

There is also an alternate way to use drawImage() that allows you to draw a specific portion of an image to a context. We will go over this method later in this series.
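As a quick preview of that alternate form: it takes a source rectangle followed by a destination rectangle, letting you copy just a region of the image. A minimal wrapper might look like this (the helper name is my own):

```javascript
//copy the sw-by-sh region whose top-left corner is (sx, sy) in the
//source image onto the context at (dx, dy), without scaling
function drawRegion(ctx, image, sx, sy, sw, sh, dx, dy) {
  //source rectangle first, then destination rectangle
  ctx.drawImage(image, sx, sy, sw, sh, dx, dy, sw, sh);
}
```

This is the form typically used to pull individual frames out of a sprite sheet, which is why we will come back to it.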

The next time we dive into Project "HoT MetaL JaZz", we will go over some of the advanced path functions built into the HTML5 canvas element, and also learn how to dynamically scale our game to fit the player's browser window. It will be an extremely exciting ride, so don't miss it!

Starting out with 3D #1 (A - E)

With each technological advancement, a few new words are invented: hard drive, operating system, application software, PDF, and so on. These are all words you wouldn't have found anywhere a hundred years ago. 3D is no exception - there are hundreds of new words and terms you should learn before you can think of going into 3D.

First off, you should know that 3D is a space that has height, width and depth. 2D only has height and width.

Note: Not only should you know 3D terms before you can tackle 3D, make sure you know your 2D terms too! Such as channel, animation, co-ordinate system, etc.

Due to the massive number of words invented for 3D, this topic has been split into several parts. I have also only included the important ones - things you will be using a lot, or things I think will become much more popular in the future. At the end of the series, you will find a downloadable PDF file with all the words neatly summed up.

#1 (A – E)
#2 (F – L)
#3 (M – P)
#4 (Q – T)
#5 (U – Z)

Bone
Much like a bone in a human body, a bone in a 3D scene is placed "inside" the skeleton of the model. When a bone in the skeleton is moved, it acts on the mesh, making the mesh move, thus deforming the model to follow the skeleton.

Boolean
In 3D, a Boolean is an object created by combining two existing objects via mathematical operators. The two objects may be subtracted, merged, or intersected to form a new object. See the screenshot below of the Boolean technique applied to two cubes.

Bounding Box
The smallest possible regular shaped block that encloses an object.

Why would this be useful? It speeds up a lot of math - collision detection, for example. Instead of checking for collisions against hundreds of points on the model, a simple box is tested instead.

This is also used as a rendering optimization (see the DirectX article), in which the renderer checks whether the bounding box has any effect on the final image. If not, the object is not rendered. This saves your computer a lot of work and speeds up gameplay.
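The collision check described above can be sketched in JavaScript. This is the standard axis-aligned overlap test, shown here in 2D (a third axis works exactly the same way); the box representation is my own:

```javascript
//two axis-aligned boxes intersect exactly when they overlap on every
//axis: each box's left edge must be left of the other's right edge
function boxesIntersect(a, b) {
  return a.x < b.x + b.width  && b.x < a.x + a.width &&
         a.y < b.y + b.height && b.y < a.y + a.height;
}
```

Only when this cheap test passes would you bother with a more precise (and more expensive) per-polygon check.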

Bump Map
Bump mapping is a technique used to modify the apparent surface of a 3D object based on a 2D, black-and-white image. When the image is projected onto the surface of a model, parts of the surface covered by lighter colors appear raised, while darker colors appear lowered. This is purely a rendering effect, but it can make a big difference in the end result - the actual model is not edited. (See Displacement Map.)

A clean sphere on the left, with a bump map in the middle, placed over the sphere will generate the result on the right.

Camera
A virtual viewpoint in 3D space. A camera represents the viewer's eye. Everything you see on the screen of a game makes use of a camera. A camera can move, rotate, zoom, etc. - whatever suits the programmer's needs.

Camera Mapping

A technique by which geometry matching the size and perspective of objects shown within a still image is constructed, and the original image mapped back onto those objects. This permits limited camera movement around the picture, giving the illusion of a 3D environment from a 2D image.

Camera Path
The given path in 3D space along which the camera will move in a 3D scene. For example, when starting a game, some games give you a quick preview of the level. That preview is the image from a camera moving along a path.

Camera Tracking
A visual-effects technique that allows the insertion of computer graphics into other footage with correct position, scale, orientation, and motion relative to the photographed objects in the shot.

Caustics
A patch of intense illumination caused by the refraction of light through a transparent object or the reflection of light from a reflective surface. This is not easy to achieve mathematically, but luckily none of us have to worry about that - the 3D package does all the work for us. Only recently have computers been able to create this effect in a 3D environment. A common example is the reflective shine on a swimming pool on a hot sunny day.

CGI
CGI stands for computer-generated imagery. This is really simple: an image created or manipulated with the help of a computer. The term is used in many contexts, but mostly in 3D - probably because 3D imagery is created from scratch, unlike photographs that are merely manipulated.

Character Animation
A subsection of animation that deals with simulating the movements of living creatures. Usually, before this can be done, the model has to be rigged. (See part 4.)

CV
CV stands for control vertex (see part 5), which is a control point used to manipulate a NURBS (see part 3) curve.

Displacement Map
Similar to a bump map, but a displacement map modifies the actual underlying geometry and is not just a rendering effect.

DOF
Stands for Depth of Field. The depth of field of a specific lens is the range of acceptable focus in front of and behind the primary focus setting. It is a function not only of the specific lens used, but also of the distance from the lens to the primary focal plane, and of the chosen aperture. Larger apertures narrow the depth of field; smaller apertures increase it.

Extrude
A modeling technique in which a 2D outline is duplicated outwards along a linear path. See the example below of extruding a flat plane.

This concludes the first section of our 3D terminology. Stay tuned, number 2 is coming soon!


Tech Roundup: Digital Audio

Are you a musician who doesn't know much about technology? Or perhaps a techie who simply needs some information on available technologies involving music production and sound? If so, then this article will give you a break-down of the major technologies involved in computer audio.

Let us start with the big picture. A DAW (Digital Audio Workstation) is the heart and soul of the digital studio. It typically processes, records, and mixes sound, and gives you the ability to use different plug-ins. Examples of DAW software include:

Steinberg Cubase - http://www.steinberg.net/
Propellerhead Record Reason Duo - http://www.propellerheads.se/
Energy-XT -  http://www.energy-xt.com/
Cakewalk SONAR - http://www.cakewalk.com/products/SONAR/
Ableton Live - http://www.ableton.com/
Pro Tools - http://www.avid.com/US/resources/digi-orientation/


A common format used in composing music, MIDI (Musical Instrument Digital Interface) actually doesn't contain any sound data. Instead, it uses instructions to interact with devices or software. MIDI is a standard "language" used by software and hardware to talk and interact with one another. For example, a MIDI file on your computer sends instructions to your MIDI player software to tell it which notes to hit, when to release notes, how much velocity to use when hitting a note, and so on.
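To make that concrete: a MIDI "note on" event is just three bytes - a status byte, a note number, and a velocity. A small sketch (the helper name is my own):

```javascript
//build the three bytes of a MIDI note-on message: status byte 0x90
//plus the channel (0-15), then the note number (0-127, 60 = middle C),
//then the velocity (0-127, roughly "how hard the key was struck")
function noteOn(channel, note, velocity) {
  return [0x90 | channel, note, velocity];
}
```

For example, noteOn(0, 60, 100) produces the bytes that tell a synthesizer to strike middle C on channel 1 at a fairly firm velocity.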


Loops are short "chunks" of music - such as a drum pattern - that can be used to form the foundation of a song. These come in multiple varieties: some are divided into separate beats and can be quantized and matched to a tempo, while others are simply a single sound file. Common loop formats include:

WAV - A standard, uncompressed sound file.
Acidized WAV - Similar to a wav, but "sliced" into segments that can be adjusted for tempo.
REX Loop - One of the most common loop formats. This format was engineered by the creators of Propellerhead Reason and Record.
Apple Loop - A loop format used in Apple software, such as Garageband and Logic.


Samples are small sound files that represent a note played from an instrument or other source. These sounds can be used individually, but are often packaged into sample "libraries" that include many related samples. Some of these libraries can be used in conjunction with MIDI data to change the sound. The most common of these is SoundFont; a format created by E-mu Systems, Inc. that includes sampled sounds arranged in categories called "banks". SoundFonts are commonly used to render "enhanced" MIDI music.


Plug-ins include virtual instruments and effects that "plug in" to your DAW to enhance and shape your sound. These can be anything from reverbs, distortion effects, overdrive effects, and vocoders. They can also be instruments such as synthesizers and sample-based sound engines. Plug-in standards include:

VST - Designed by Steinberg, VSTs are programs that are loaded into a DAW as effects processors and the like.
VSTi - Similar to VST, but generally contain sampled sounds from one or more musical instruments.
RTAS - This format was developed by DigiDesign and Avid.
AU - Apple's own plug-in standard.

Putting it all together

It can be daunting putting together the right combination of software for music production. After all, not all DAW software will use all plug-in formats, loop formats, or even sample formats. It is important to keep in mind a few things as you shop for digital audio tools:

Compatibility. Do the formats supported by your DAW match the formats of the loops, samples, and plug-ins you want to use?
Copy-Protection. Do you have to buy any extra copy-protection hardware, such as a dongle?
Performance. Will your computer be able to handle the overhead of the software and all effects/instruments you plan on using?

If you are seriously considering moving over to the digital music standard, it will pay off to be patient, do your research, and learn the ins and outs of each piece of software in which you are interested. It may take some time to settle on something that fits your needs - and budget - but knowing what you're looking for is the first step in making the leap.

Project "HoT MetaL JaZz" - Part 1

In the introduction to this series, we went over how to get started programming games with HTML5 and JavaScript. If you haven't seen that article yet, I suggest you do so before proceeding further. The code in the previous article will be the basis upon which the remaining articles rely.

So, we should now have a blank HTML5 canvas element on our Web page. It seems a shame to let it just sit there, sight-unseen, so let's learn how to use JavaScript to interact with and manipulate the canvas element.

Go to the script portion of your source (i.e. the lines between the <script> and </script> tags). Insert the following lines of code above the declaration for the startGame() function:

var primaryCanvas;
var primaryContext;
var bufferCanvas;
var bufferContext;
var animation;

We will use the first four variables to reference our drawing surfaces - the "primary" variables show our graphics, while the "buffer" variables store data that is put together in preparation for drawing. Each HTML5 Canvas element exposes a drawing context, which is what we will primarily deal with when writing games. The fifth variable will simply be used to animate the context later.

Now, after the variable definitions, go into the body of the function "startGame()", and enter the following code:

primaryCanvas = document.getElementById("gamecanvas");
if (primaryCanvas.getContext) {
  primaryContext = primaryCanvas.getContext("2d");
  primaryContext.clearRect(0, 0, 300, 300);
}

bufferCanvas = document.getElementById("backbuffer");
if (bufferCanvas.getContext) {
  bufferContext = bufferCanvas.getContext("2d");
  bufferContext.clearRect(0, 0, 300, 300);
}

So, what did we just do here? First, we set our primaryCanvas variable to a reference to the actual HTML5 canvas element in our page, "gamecanvas". Then we did a check to make sure the primaryCanvas variable has a context. If so, we get the "2d" context, and set it into our primaryContext variable. We finish up by clearing a rectangular area of the context, starting from 0/0 (top-left) to 300/300 (the size of the canvas). We repeat the same process for the backbuffer.

The "2d" context of a canvas gives us access to all 2D functions and parameters that apply to drawing. In the future, when WebGL (and possibly other technologies) become more prevalent, there may be additional contexts. For now, though, we want the "2d" context.

Notice that rather than call the width and height properties of the primaryCanvas and bufferCanvas, I put literal values (i.e. 300) for the width and height of the clearRect function. Calling canvas elements directly is expensive, in that it has a fair bit of overhead involved. Thus, I simply use literal values instead.

Now that we have our drawing contexts, we're ready to make things happen.

After the code for grabbing our contexts, enter the following new function:

function draw() {
  bufferContext.fillStyle = "rgb(0, 0, 0)";
  bufferContext.fillRect(0, 0, 300, 300);
  bufferContext.fillStyle = "rgb(255, 255, 255)";
  bufferContext.font = "24px sans-serif";
  bufferContext.textAlign = "center";
  bufferContext.fillText("Oh, HI!", 150, 150);

  primaryContext.drawImage(bufferCanvas, 0, 0, 300, 300);
}

Finally, save your html file, and open it in your Web browser. If all went well, you should be greeted by a friendly message surrounded by a black background.

Here's a review of the above code:

The [context].fillStyle property tells the context which color - in RGB format - to use for fills. This value will persist until it is given a new value later on in the code. As for [context].fillRect, that simply takes the fillStyle (which defaults to RGB 0,0,0 - black) and uses that color to draw a filled rectangle, given an X/Y origin, and a width/height.

After drawing the black background in our context, we set the fillStyle to white (RGB 255,255,255), and use [context].font to adjust the font. The font can be any CSS-compliant font style. Then, we set [context].textAlign to center. This makes the text centered on the point used to draw it. Finally, we draw our text with [context].fillText, giving it the string, X-coordinate, and Y-coordinate to start from.

The context of an HTML5 canvas has a plethora of drawing functions beyond the code we just plugged in. The context has full support for translation, rotation, scaling, and has a transform method. To add a little spice to our project, let's make the text spin around like a propeller!

Go back to your code, and enter the following line at the end of your startGame() function:

animation = setInterval("draw()", 50);

And then add the following into your draw() function, just before the primaryContext.drawImage(...) call:

bufferContext.translate(150, 150);
bufferContext.rotate(Math.PI / 60); //rotate a small step each frame (the step size here is illustrative)
bufferContext.translate(-150, -150);

Save your code, and open it in your browser. Voila - the text spins around like a propeller, centered on the canvas.

Of course, you may be wondering about a few things. First, the [context].translate() calls. Why do we need those? The answer is simple - rotations always happen around the context's origin, which starts at the top-left corner. We change this by translating the context - and thus the rotation point - to the center of the canvas (X=150, Y=150). Then we apply our rotation, and clean up our translation "mess" by translating back.

You may also find it odd that the letters "jitter" a bit. This is perfectly normal - text drawing along an angle isn't always precise. Of course, there are ways around this problem (such as drawing the text to the backbuffer first, and then rotating the backbuffer itself before copying it to the primary context).
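A sketch of that workaround, assuming the 300x300 canvases from this article (the function name is my own): draw the text upright on the backbuffer as usual, then rotate the primary context while copying the whole buffer across, so the glyphs themselves are never rasterized at an angle.

```javascript
//copy the (upright) backbuffer onto the primary context at an angle:
//pivot at the canvas center, rotate, pivot back, then blit the buffer
function drawRotatedBuffer(primaryContext, bufferCanvas, angle) {
  primaryContext.save();
  primaryContext.translate(150, 150);
  primaryContext.rotate(angle);
  primaryContext.translate(-150, -150);
  primaryContext.drawImage(bufferCanvas, 0, 0, 300, 300);
  primaryContext.restore();
}
```

The save()/restore() pair means the primary context's transform is left untouched for whatever you draw next.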

We've only scratched the surface of what the HTML5 canvas is capable of. In the next article, we will cover drawing lines and shapes, as well as drawing images to the context. See you there!

Those darn cameras! Camera 1 Part 1

I know I am not the only one who feels this way, but cameras are a huge pain! I have seen so many people quit projects just because they couldn't get those darn cameras right! It's fine in 2D games - not much math, just follow the character. 3D is a completely different story!

I have decided to write a couple of articles dedicated to cameras! The cameras I will be detailing:
First Person
Third Person
Strategy type (If I have the time).

Incidentally, my current game uses a third person camera, so that is what I will start with!

A third person camera is, in my opinion, the best sort of camera, and is also the most popular. The problem is just the math involved.

Co-ordinate system
Typically, programs use the Cartesian co-ordinate system, and games are no different - except that 3D games use a 3D Cartesian co-ordinate system.

If you have no idea what the Cartesian co-ordinate system is, I would recommend you stop reading right now and read up on the Cartesian co-ordinate system!

So next up, we have the polar co-ordinate system.
The Cartesian system is known for its simplicity in moving objects; the polar system is known for its simplicity in rotations. A polar position is defined by an angle theta and a radius r.

Thanks to basic trigonometry, we can convert from polar co-ordinates to Cartesian co-ordinates via these formulas:

x = r * cos(theta)
y = r * sin(theta)

One can also convert from the Cartesian system to the polar system via these formulas:

r = sqrt(x^2 + y^2)
theta = atan2(y, x)
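For the code-minded, here are the polar/Cartesian conversions as JavaScript (the function names are my own):

```javascript
//polar (r, theta) to Cartesian (x, y)
function polarToCartesian(r, theta) {
  return { x: r * Math.cos(theta), y: r * Math.sin(theta) };
}

//Cartesian (x, y) to polar (r, theta);
//atan2 handles all four quadrants correctly
function cartesianToPolar(x, y) {
  return { r: Math.sqrt(x * x + y * y), theta: Math.atan2(y, x) };
}
```

Note that the angle is in radians here, as JavaScript's Math functions expect.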

A further advancement of the polar co-ordinate system is the spherical system.
The spherical system has three arguments: (r, theta, phi), where theta and phi are both angles.

Now let me be the first on this blog to say that the spherical co-ordinate system is the most logical choice for a 3D camera, because of how naturally it expresses rotation in 3D space.

And yes, lucky for you, one can convert the spherical system to a 3D Cartesian system (here with theta measured down from the positive Z axis):

x = r * sin(theta) * cos(phi)
y = r * sin(theta) * sin(phi)
z = r * cos(theta)

Or, as others prefer - a Y-up convention common in games, with the Y and Z expressions swapped:

x = r * sin(theta) * cos(phi)
y = r * cos(theta)
z = r * sin(theta) * sin(phi)

You can also convert from the Cartesian system to the spherical system like so:

r = sqrt(x^2 + y^2 + z^2)
theta = acos(z / r)
phi = atan2(y, x)
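The same spherical conversions in JavaScript, using the convention where theta is measured down from the positive Z axis (the function names are my own):

```javascript
//spherical (r, theta, phi) to Cartesian (x, y, z)
function sphericalToCartesian(r, theta, phi) {
  return {
    x: r * Math.sin(theta) * Math.cos(phi),
    y: r * Math.sin(theta) * Math.sin(phi),
    z: r * Math.cos(theta)
  };
}

//Cartesian (x, y, z) back to spherical (r, theta, phi)
function cartesianToSpherical(x, y, z) {
  var r = Math.sqrt(x * x + y * y + z * z);
  return { r: r, theta: Math.acos(z / r), phi: Math.atan2(y, x) };
}
```

Sweeping phi with r and theta held fixed moves a point in a horizontal circle - exactly the motion you want for orbiting a camera around a character.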

Why did I teach you this?
In order to understand the math used, you have to understand the co-ordinate system it is based on. Without some basic knowledge of the topic, you wouldn't understand anything - and that would be a waste.

I know what some of you are thinking. “I get the concept, now why on earth would I need to know this junk?”.

Simple, really: I assume you are familiar with the Cartesian system, which is used to move a 3D object. The spherical system is used for the 3D camera, for the reasons given above.

See that little dot in the centre? Pretend that it is the character, and observe how easily the camera can travel around it without much math.

The camera target would be the character's X, Y, and Z co-ordinates.
The position of the camera would be computed as explained above.
There you have it!
In the next lesson I will go over changing the camera's height, rotation, etc.

Or, if you want, you can use the formula I am currently using in my project, which is a little more complex (more on this later):

X = -1*cos(-1*YRotation*PI/180) + XPosition
Y = YPosition
Z = -1*sin(-1*YRotation*PI/180) + ZPosition

Note: YRotation and XPosition, YPosition, ZPosition, refers to the character, not the camera itself.
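As a minimal sketch, the formula above expressed as a JavaScript function (the function name is my own, and I have read the leading factor on the sin term as -1):

```javascript
//place the camera one unit behind the character, based on the
//character's heading (yRotation, in degrees) and position
function cameraPosition(yRotation, xPos, yPos, zPos) {
  var rad = -1 * yRotation * Math.PI / 180;
  return {
    x: -1 * Math.cos(rad) + xPos,
    y: yPos,
    z: -1 * Math.sin(rad) + zPos
  };
}
```

To pull the camera further back, you would multiply the cos and sin terms by a distance factor instead of -1.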

In part 2 we will be looking at a downloadable example, as well as manipulating the camera such as Zoom, height, rotation etc. I shall also introduce the spring system. In part 3 we will be looking at other considerations, such as boundaries. I hope you learnt something from this article, thank you so much for reading.

Sources:
Mathematics for Game Developers
Programming Gems 4
My math teacher :P