Sunday, December 30, 2007
I renamed my iPhone / iPod touch engine to Oolong Engine and moved it to a new home. Its URL is now
www.oolongengine.com
Next I will add a 3rd person camera model. This camera will be driven by the accelerometer and the touch screen.
Wednesday, December 26, 2007
Animating Normal (Maps)
There seems to be ongoing confusion about how to animate normal maps. The best answer to this is: you don't :-).
The obvious approach is to stream in two normal maps and then modulate the two normals. If you are on a console platform you just don't want to do this. So what would be a good way to animate a normal? You modulate height fields. Where both height fields have peaks, the result should also have a peak. Where one of the height fields is zero, the result should also be zero, independent of the other height field.
Usually, a normal map is formed by computing the relief of a height field (bump map) over a flat surface. If h(u, v) is the bump map's height at the texture coordinates (u, v), the standard definition of the normal map is

N(u, v) = normalize(-∂h/∂u, -∂h/∂v, 1)
The height fields are multiplied to form a combined height field like this:

h(u, v) = h1(u, v) · h2(u, v)
To determine the normal vector of this height field according to the first equation, one needs the partial derivatives of this function. This is a simple application of the product rule:

∂h/∂u = h1 · ∂h2/∂u + h2 · ∂h1/∂u
And similarly for the partial derivative with respect to v. Thus:

N(u, v) = normalize(-(h1 · ∂h2/∂u + h2 · ∂h1/∂u), -(h1 · ∂h2/∂v + h2 · ∂h1/∂v), 1)
BTW: to recover the height field's partial derivatives from the normal map we can use:

∂h/∂u = -Nx / Nz and ∂h/∂v = -Ny / Nz
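To make the math concrete, here is a minimal C++ sketch of the idea (my own illustration, not code from the post; names like combinedNormal are made up): it combines the partial derivatives of two height fields with the product rule and builds the normal of the modulated field.

#include <cmath>

struct Vec3 { float x, y, z; };

// Normalize a vector (assumes non-zero length).
static Vec3 normalize(Vec3 v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Build the normal of the combined height field h = h1 * h2 at one texel.
// h1, h2 are the height values; (dh1du, dh1dv) and (dh2du, dh2dv) are the
// partial derivatives of the two height fields at that texel.
Vec3 combinedNormal(float h1, float dh1du, float dh1dv,
                    float h2, float dh2du, float dh2dv)
{
    // Product rule: d(h1*h2)/du = h1 * dh2/du + h2 * dh1/du (same for v).
    float dhdu = h1 * dh2du + h2 * dh1du;
    float dhdv = h1 * dh2dv + h2 * dh1dv;

    // Standard height-field normal: normalize(-dh/du, -dh/dv, 1).
    return normalize({ -dhdu, -dhdv, 1.0f });
}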
About Raytracing
My friend Dean Calver published an article about ray tracing that is full of wisdom. The title says it all: Real-Time Ray Tracing: Holy Grail or Fool's Errand? It is straight to the point :-)
LogLuv HDR implementation in Heavenly Sword
Heavenly Sword stores HDR data in 8:8:8:8 render targets using a LogLuv encoding. I talked to Marco about this before and saw a nice description on Christer Ericson's blog here.
I came up with a similar idea that should be faster and a bit more hardware friendly, with a new compression format that I call L16uv. The name more or less says it all :-)
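Since the post does not spell the format out, here is purely my own guess at what an encoding with that name could look like, as a C++ sketch: 16 bits of luminance split across two 8-bit channels, plus CIE u'v' chroma in the remaining two (the 410 scale factor is borrowed from Ward's LogLuv). The real L16uv layout may well differ.

#include <algorithm>
#include <cmath>
#include <cstdint>

struct RGBA8 { std::uint8_t r, g, b, a; };

// Pack a linear Rec.709 RGB color into a hypothetical "L16uv" layout:
// 16 bits of luminance split across two 8-bit channels, CIE u'v' chroma
// in the remaining two. maxLuminance is an assumed scene white point.
RGBA8 encodeL16uv(float r, float g, float b, float maxLuminance)
{
    // Linear Rec.709 RGB to CIE XYZ.
    float X = 0.4124f * r + 0.3576f * g + 0.1805f * b;
    float Y = 0.2126f * r + 0.7152f * g + 0.0722f * b;
    float Z = 0.0193f * r + 0.1192f * g + 0.9505f * b;

    // CIE 1976 u'v' chromaticity.
    float denom = X + 15.0f * Y + 3.0f * Z + 1e-6f;
    float u = 4.0f * X / denom;
    float v = 9.0f * Y / denom;

    // 16-bit luminance, here simply scaled linearly against maxLuminance.
    float yNorm = std::min(std::max(Y / maxLuminance, 0.0f), 1.0f);
    std::uint16_t L = static_cast<std::uint16_t>(yNorm * 65535.0f + 0.5f);

    RGBA8 out;
    out.r = static_cast<std::uint8_t>(L >> 8);                       // luminance, high byte
    out.g = static_cast<std::uint8_t>(L & 0xFF);                     // luminance, low byte
    out.b = static_cast<std::uint8_t>(std::min(u * 410.0f, 255.0f)); // u' (LogLuv-style scale)
    out.a = static_cast<std::uint8_t>(std::min(v * 410.0f, 255.0f)); // v'
    return out;
}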
Normal Map Data II
Here is one interesting normal data idea I missed. It is taken from Christer Ericson's blog:
One clever thing they do (as mentioned on these two slides) is to encode their normal maps so that you can feed either a DXT1 or DXT5 encoded normal map to a shader, and the shader doesn’t have to know. This is neat because it cuts down on shader permutations for very little shader cost. Their trick is for DXT1 to encode X in R and Y in G, with alpha set to 1. For DXT5 they encode X in alpha, Y in G, and set R to 1. Then in the shader, regardless of texture encoding format, they reconstruct the normal as X = R * alpha, Y = G, Z = Sqrt(1 - X^2 - Y^2).
A DXT5-encoded normal map has much better quality than a DXT1-encoded one, because the alpha component of DXT5 is 8 bits whereas the red component of DXT1 is just 5 bits, but more so because the alpha component of a DXT5 texture is compressed independently from the RGB components (the three of which are compressed dependently for both DXT1 and DXT5) so with DXT5 we avoid co-compression artifacts. Of course, the cost is that the DXT5 texture takes twice the memory of a DXT1 texture (plus, on the PS3, DXT1 has some other benefits over DXT5 that I don’t think I can talk about).
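The reconstruction from the quote can be written down as a tiny C++ helper. This is my own illustration (assuming the usual 0.5-biased storage of X and Y in [0, 1]), not Insomniac's shader code:

#include <algorithm>
#include <cmath>

struct Normal { float x, y, z; };

// Decode a tangent-space normal from a texel encoded with the dual
// DXT1/DXT5 trick described above: for DXT1 the encoder wrote X into red
// and set alpha to 1; for DXT5 it wrote X into alpha and set red to 1.
// Inputs are the sampled channel values in [0, 1].
Normal decodeDualDxtNormal(float red, float green, float alpha)
{
    // Either red or alpha is 1, so the product recovers X in both cases.
    float x = red * alpha * 2.0f - 1.0f;   // back to [-1, 1]
    float y = green * 2.0f - 1.0f;

    // Reconstruct Z from the unit-length constraint.
    float z = std::sqrt(std::max(0.0f, 1.0f - x * x - y * y));
    return { x, y, z };
}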
Tuesday, December 25, 2007
Normal Data
Normal data is one of the more expensive assets in games. Creating normal data in ZBrush or Mudbox can easily add up to a few million dollars.
Storing normal data in textures in a way that preserves the original data with the lowest error level is an art form that needs special attention.
I am now aware of three ways to destroy normal data by storing it in a texture:
1. Store the normal in a DXT1 compressed texture
2. Store the normal in a DXT5 compressed texture by storing the x value in the alpha channel and the y value in the green channel, and by storing some other color data in the red and blue channels.
3. Store the normal in its original form, as a height map, in one color channel of a DXT1 compressed texture, with other data in the two remaining channels.
They all have a common denominator: the DXT format was created to compress color data so that the resulting color is still perceived as similar. Perceiving 16 vectors as similar follows different rules than perceiving 16 colors as similar. Therefore the best solutions so far for storing normals are to:
- not compress them at all
- store y in the green channel of a DXT5 compressed texture and x in the alpha channel, and color the two empty channels black
- use the DXN format, which consists of two DXT5-style compressed alpha channels
- store a height map in an alpha channel of a DXT5 compressed texture and generate the normal out of the height map.
The DXT5 solutions and the DXN solution occupy 8 bits per normal. The height map solution occupies 4 bits per normal; it is probably not as good looking as the 8-bit-per-normal solutions.
There are lots of interesting areas regarding normals other than how they are stored. There are challenges when you want to scale, add, modulate, deform, blend or filter them. Then there is also anti-aliasing ... :-) ... food for thought.
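As a rough illustration of the height map variant mentioned above, generating the normal out of a stored height map, here is a C++ sketch using central differences. All names are mine; in a real game this math would run in the pixel shader on the sampled alpha channel.

#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Height map stored as one value per texel in [0, 1].
struct HeightMap
{
    int width, height;
    std::vector<float> data;

    float at(int x, int y) const
    {
        // Clamp to the edge so the border texels stay valid.
        x = std::min(std::max(x, 0), width - 1);
        y = std::min(std::max(y, 0), height - 1);
        return data[y * width + x];
    }
};

// Derive a tangent-space normal at (x, y) with central differences.
// bumpScale controls how strong the relief appears.
Vec3 normalFromHeightMap(const HeightMap& hm, int x, int y, float bumpScale)
{
    float dhdu = (hm.at(x + 1, y) - hm.at(x - 1, y)) * 0.5f * bumpScale;
    float dhdv = (hm.at(x, y + 1) - hm.at(x, y - 1)) * 0.5f * bumpScale;

    // normalize(-dh/du, -dh/dv, 1)
    float len = std::sqrt(dhdu * dhdu + dhdv * dhdv + 1.0f);
    return { -dhdu / len, -dhdv / len, 1.0f / len };
}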
Wednesday, December 19, 2007
Renderer Design
Renderer design is an interesting area. I recommend starting with the lighting equation, splitting it up into a material part and a light part, and then moving on from there. You can then think about what data you need to handle a huge number of direct lights and shadows (shadows are harder than lights) and how you do the global illumination part. Especially the integration of global illumination and many lights should get you thinking for a while.
Here is an example:
1. render shadow data from the cascaded shadow maps into a shadow collector that collects indoor, outdoor, cloud shadow data
2. render at the same time world-space normals in the other three channels of the render target
3. render all lights into a light buffer (only the light source properties not the material properties)
4. render all colors and the material properties into a render target while applying all the lights from the light buffer (here you stitch together the Blinn-Phong lighting model or whatever you use)
5. do global illumination with the normal map
6. do PostFX
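A very rough C++ sketch of these six steps as a frame loop could look like the following; Scene, Camera and all pass functions are placeholders I made up, each standing in for a full render pass:

// Rough sketch of the six render passes as a frame loop.
struct Scene  {};
struct Camera {};

void renderShadowCollectorAndNormals(const Scene&, const Camera&) { /* passes 1 + 2 */ }
void renderLightBuffer(const Scene&, const Camera&)               { /* pass 3 */ }
void renderMaterialPass(const Scene&, const Camera&)              { /* pass 4 */ }
void applyGlobalIllumination(const Scene&, const Camera&)         { /* pass 5 */ }
void renderPostFX(const Camera&)                                  { /* pass 6 */ }

void renderFrame(const Scene& scene, const Camera& camera)
{
    // 1. + 2. Shadow data from the cascaded shadow maps into the shadow
    //         collector, world-space normals into the other three channels.
    renderShadowCollectorAndNormals(scene, camera);

    // 3. All lights into the light buffer (light source properties only).
    renderLightBuffer(scene, camera);

    // 4. Colors and material properties, applying the lights from the
    //    light buffer (here the Blinn-Phong or other model is stitched together).
    renderMaterialPass(scene, camera);

    // 5. Global illumination using the stored normals.
    applyGlobalIllumination(scene, camera);

    // 6. PostFX.
    renderPostFX(camera);
}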
Tuesday, December 18, 2007
How to design a Material System
Here is my take on how a good meta-material / material system should be written and how the art workflow works:
I just came across the latest screenshots of Mass Effect that will ship soon:
http://masseffect.bioware.com/gallery/
The thing I found remarkable is the material / meta-material system. Skin looks like skin, cloth has a completely different look, and leather really looks like leather. Combined with great normal maps, Mass Effect looks like the game with the best-looking characters to date.
The best way to build a material / meta-material system is very close to what was done in Table Tennis. I described the system in ShaderX4 some time before I joined Rockstar.
The idea is to dedicate a specific *.fx file to each material and to name the file accordingly. So you end up with eye.fx, skin.fx, metal.fx, brushed_metal.fx, leather.fx, water.fx etc.; you will probably end up with 15 - 20 *.fx files (they would hold different techniques for shadowed / unshadowed etc.). This low number of files makes shader switching a non-issue and also allows sorting objects according to their shaders, if that is a bottleneck.
The *.fx files can be called meta-materials (this naming convention was inspired by a similar system that was used in the Medal of Honor games, back when they were based on the Quake 3 engine).
The material files hold the data, while each of the meta-material files only holds code. So we distinguish between code (the *.fx file) and data (*.mtl or something similar). One of the things that can be done to reduce data updates is to shadow the constant data and only update the data that changes. In pseudo-code this would be:
- draw eyes from first character in the game
- apply the eye.fx file
- load the data for this file from the material file
- draw the eyes from the second character in the game
- eye.fx is still applied, so we do not have to do it again ... the shader code just stays on the graphics card and does not need to be reloaded
- load the data file for this pair of eyes and load only data into the graphics card that has changed by comparing the shadowed copy of the data from the previous draw call with the one that needs to be loaded now
- move on like this for all eyes
- then move on with hair ...
The underlying idea of the data shadowing and the sorting by shaders is to reduce the amount of state changes needed when you want to apply very detailed and very different surfaces to a character. In other words, if you set up a game with such a meta-material / material system, there is a high chance that you can measure a performance difference between sorting by shader and sorting by any other criteria.
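In code, the constant-data shadowing could look roughly like this C++ sketch; Material and setShaderConstant are hypothetical stand-ins for whatever the engine actually uses:

#include <cstddef>
#include <vector>

// Data loaded from a material file (*.mtl or similar) - stand-in type.
struct Material { std::vector<float> constants; };

// Pretend upload of a single shader constant to the GPU - stand-in call.
void setShaderConstant(std::size_t index, float value) { (void)index; (void)value; }

class MaterialConstantCache
{
public:
    // Upload only the constants that differ from the shadowed copy of the
    // previous draw call; the *.fx program itself stays bound.
    void apply(const Material& material)
    {
        const std::vector<float>& src = material.constants;
        shadow_.resize(src.size(), /* impossible value */ -1e30f);

        for (std::size_t i = 0; i < src.size(); ++i)
        {
            if (shadow_[i] != src[i])
            {
                setShaderConstant(i, src[i]);
                shadow_[i] = src[i];  // remember what is on the GPU now
            }
        }
    }

private:
    std::vector<float> shadow_;  // last values uploaded for this meta-material
};

A renderer would bind eye.fx once and then call apply() for each pair of eyes, so only the constants that actually differ between the two materials travel to the GPU.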
For me the key elements of having a truly next-gen looking shader architecture for a next-gen game are:
- having a low number of *.fx files to reduce shader switching + good sorting of objects to reduce shader switching
- letting the artist apply those *.fx files and change the settings, switching on and off features etc.
- tailoring the low number of shaders very much to specific materials, so that they look very good (the opposite approach was taken in DOOM III, where everything consists of metal and plastic and therefore the same lighting/shader model is applied to everything)
The low number of *.fx files and the intelligence we put into the creation of the effect files should give us a well-performing and good-looking game, while still letting us control this process and predict performance.
Sunday, December 9, 2007
Porting an Open-Source Engine to the iPhone?
I evaluated several open-source engines:
- Ogre: the architecture and design are not very performance friendly. The way C++ is used makes working with it and re-designing it quite difficult. An example: each material has its own C++ file and there is an inheritance chain from a base class ...
- Irrlicht: the Mac OS X version I tried looks like a Quake 3 engine. It also seems to lack lots of design elements of a modern 3D engine. Other than that it looks quite good for a portable device, but then you might as well use the original Quake 3 engine ...
- Quake 3: this is obviously a very efficient game engine with rock-solid tools. I worked with this engine on the Medal of Honor series before, but I wanted a bit more flexibility and I wanted to target more advanced hardware.
- Crystal Space: why is everything a plug-in? Can't get my head around this.
- C4: this is one of my favourite engines, but it is closed source :-(
Thursday, December 6, 2007
iPhone Graphics Programming
Lately I have been into iPhone graphics programming. The interesting part is that the iPhone uses a PowerVR chip similar to the one in the Dreamcast. Most of the user interface elements want to be addressed via Objective-C, but from there you can go with C/C++. So my small little engine runs C++ with a nice memory manager and a logging system.