Two days ago I committed a major Oolong update. Please check out the Oolong Engine blog at
http://www.oolongengine.com
I updated the memory manager and the math library, upgraded to the latest POWERVR POD format, and added VBO support to each example. Please also note that previous updates added a new memory manager, the VFP math library, and a bunch of smaller changes.
Next on my list: looking into the sound manager (it seems like the current version allocates memory during the frame) and adding the DOOM III level format as a game format. Obviously zip support would be nice as well ... let's see how far I get.
Sunday, December 28, 2008
Thursday, December 25, 2008
Programming Vertex, Geometry and Pixel Shaders
A Christmas present: we just went public with "Programming Vertex, Geometry and Pixel Shaders". I am a co-author of this book, and we published it for free on www.gamedev.net at
http://wiki.gamedev.net/index.php/D3DBook:Book_Cover
If you have any suggestions, comments or additions to this book, please give me a sign or write it into the book comment pages.
Wednesday, December 24, 2008
Good Middleware
Kyle Wilson wrote up a summary of what good middleware should look like:
An interesting read.
Tuesday, December 23, 2008
Quake III Arena for the iPhone
Just realized that one of the projects I contributed some code to went public in the meantime. You can get the source code at
http://code.google.com/p/quake3-iphone/
There is a list of issues. If you have more spare time than me, maybe you can help out.
iP* programming tip #8
This is the Christmas issue of the iPhone / iPod touch programming tips. This time we deal with the touch interface. The main challenge I found with touch-screen support is that it is hard to track, for example, forward / backward / left / right and fire at the same time. Let's say the user presses fire and then presses forward; what happens when he accidentally slides his finger a bit?
The problem is that each event is defined by the screen region it happens in. When the user slides his finger, he leaves this region. In other words, if you treat a touch as "on" and a lifted finger as "off", then a finger that is moved away and then lifted leaves the event stuck on.
The workaround: if the user slides his finger away, the finger's previous location is used to find the event region the touch belongs to; if the current location is no longer inside that region, the event defaults to off.
Touch-screen support for a typical shooter might work like this:
In touchesBegan, touchesMoved and touchesEnded there is a loop like this:
// Enumerates through all touch objects
for (UITouch *touch in touches)
{
[self _handleTouch:touch];
touchCount++;
}
_handleTouch might look like this:
- (void)_handleTouch:(UITouch *)touch
{
CGPoint location = [touch locationInView:self];
CGPoint previousLocation;
// if we are in a touchMoved phase use the previous location but then check if the current
// location is still in there
if (touch.phase == UITouchPhaseMoved)
previousLocation = [touch previousLocationInView:self];
else
previousLocation = location;
...
// fire event
// lower right corner .. box is 40 x 40
if (EVENTREGIONFIRE(previousLocation))
{
if (touch.phase == UITouchPhaseBegan)
{
// only trigger once: check that the fire bit is not set yet
if (!(_bitMask & Q3Event_Fire))
{
[self _queueEventWithType:Q3Event_Fire value1:K_MOUSE1 value2:1];
_bitMask|= Q3Event_Fire;
}
}
else if (touch.phase == UITouchPhaseEnded)
{
if (_bitMask & Q3Event_Fire)
{
[self _queueEventWithType:Q3Event_Fire value1:K_MOUSE1 value2:0];
_bitMask &= ~Q3Event_Fire; // clear the fire bit
}
}
else if (touch.phase == UITouchPhaseMoved)
{
if (!(EVENTREGIONFIRE(location)))
{
if (_bitMask & Q3Event_Fire)
{
[self _queueEventWithType:Q3Event_Fire value1:K_MOUSE1 value2:0];
_bitMask &= ~Q3Event_Fire; // clear the fire bit
}
}
}
}
...
Tracking whether the switch is on or off can be done with a bit mask. The event is sent off to the game with a separate _queueEventWithType method.
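The EVENTREGIONFIRE macro used above is not defined in this snippet; it just tests whether a point lies inside the fire box. A plausible definition, assuming a 480x320 landscape layout in UIKit coordinates (y grows downward) with the 40 x 40 fire box in the lower right corner, could look like this. The real macro lives in the quake3-iphone source:
// hypothetical definition, not the original quake3-iphone macro:
// 480x320 landscape screen, 40 x 40 box anchored in the lower right corner
#define EVENTREGIONFIRE(pt) ((pt).x >= 440.0f && (pt).y >= 280.0f)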
Sunday, December 14, 2008
iP* programming tip #7
This time I will cover point sprites in the iPhone / iPod touch programming tip. The idea is that a set of points (the simplest primitive in OpenGL ES rendering) describes the positions of point sprites, while their appearance comes from the current texture map. Point sprites are therefore screen-aligned sprites that offer a reduced geometry footprint and transform cost, because each one is represented by a single point == vertex. This is useful for particle systems, lens flares, light glow and other 2D effects.
The relevant OpenGL ES calls are:
- glEnable(GL_POINT_SPRITE_OES) - this is the global switch that turns point sprites on. Once enabled, all points will be drawn as point sprites.
- glTexEnvi(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE) - this enables [0..1] texture coordinate generation for the four corners of the point sprite. It can be set per texture unit. If disabled, all corners of the quad have the same texture coordinate.
- glPointParameterfv(GLenum pname, const GLfloat * params) - this is used to set the point attenuation as described below.
user_clamp represents the GL_POINT_SIZE_MIN and GL_POINT_SIZE_MAX settings of glPointParameterfv(). impl_clamp represents an implementation-dependent point size range.
GL_POINT_DISTANCE_ATTENUATION is used to pass in params as an array containing the distance attenuation coefficients a, b, and c, in that order.
In case multisampling is used (not officially supported), the point size is clamped to have a minimum threshold, and the alpha value of the point is modulated by the following equation:
GL_POINT_FADE_THRESHOLD_SIZE specifies the point alpha fade threshold.
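Putting the calls together, a minimal point sprite setup could look like the following sketch. The attenuation coefficients and point sizes are made-up values, not the ones from the Oolong example:
const GLsizei particleCount = 600; // one point per particle
const GLfloat atten[3] = { 1.0f, 0.0f, 0.01f }; // coefficients a, b, c (made up)
glEnable(GL_POINT_SPRITE_OES);
glTexEnvi(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
glPointParameterfv(GL_POINT_DISTANCE_ATTENUATION, atten);
glPointParameterf(GL_POINT_SIZE_MIN, 1.0f);
glPointParameterf(GL_POINT_SIZE_MAX, 64.0f);
glPointParameterf(GL_POINT_FADE_THRESHOLD_SIZE, 4.0f);
// positions are assumed to be bound with glVertexPointer()
glDrawArrays(GL_POINTS, 0, particleCount);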
Check out the Oolong engine example Particle System for an implementation. It renders 600 point sprites at nearly 60 fps; increasing the number of point sprites to 3000 drops the framerate to around 20 fps.
Friday, December 12, 2008
Free ShaderX Books
Eric Haines provided a home for the three ShaderX books that are now available for free. Thanks so much for this! Here is the URL
http://tog.acm.org/resources/shaderx/
http://tog.acm.org/resources/shaderx/
Thursday, December 11, 2008
iP* programming tip #6
This time we are covering another fixed-function technique from DirectX 7/8 times: matrix palette skinning. Matrix palette support is an OpenGL ES 1.1 extension that is supported on the iPhone.
It allows using a set of matrices to transform the vertices and the normals. Each vertex has a set of indices into the palette and a corresponding set of n weights.
The vertex is transformed by the modelview matrices specified by the vertex's respective indices. These results are then scaled by the respective weights and summed to create the eye-space vertex.
A similar procedure is followed for normals, which are transformed by the inverse transpose of the modelview matrix.
The main OpenGL ES functions that support the matrix palette are:
- glMatrixMode(GL_MATRIX_PALETTE_OES) - set the matrix mode to palette
- glCurrentPaletteMatrixOES(n) - set the currently active palette matrix; used to load each matrix in the palette
- glEnableClientState(GL_MATRIX_INDEX_ARRAY_OES) and glEnableClientState(GL_WEIGHT_ARRAY_OES) - enable the matrix index and weight vertex arrays
- glMatrixIndexPointerOES() and glWeightPointerOES() - load the per-vertex index and weight data
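A minimal sketch of how these calls fit together; boneCount, boneMatrix, stride and the two pointers are assumptions for illustration, not Oolong code:
glEnable(GL_MATRIX_PALETTE_OES);
glMatrixMode(GL_MATRIX_PALETTE_OES);
for (int i = 0; i < boneCount; ++i)
{
    glCurrentPaletteMatrixOES(i); // select palette matrix i ...
    glLoadMatrixf(boneMatrix[i]); // ... and load the modelview matrix of bone i
}
glEnableClientState(GL_MATRIX_INDEX_ARRAY_OES);
glEnableClientState(GL_WEIGHT_ARRAY_OES);
// up to four indices / weights per vertex on most implementations
glMatrixIndexPointerOES(4, GL_UNSIGNED_BYTE, stride, matrixIndices);
glWeightPointerOES(4, GL_FLOAT, stride, weights);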
Tuesday, December 9, 2008
Cached Shadow Maps
A friend just asked me how to design a shadow map system for many lights with shadows. A quite good explanation was already given in the following post back in 2003:
http://www.gamedev.net/community/forums/viewreply.asp?ID=741199
Yann Lombard explains how to first pick the light sources that should cast shadows; he uses distance, intensity, influence and other parameters to pick them.
He keeps a cache of shadow maps that can have different resolutions. His cache solution is pretty generic; I would build a more dedicated cache just for shadow maps.
After having picked the light sources that should cast shadows, I would only update the shadow maps in that cache that actually change. This depends on whether there is an object flagged as dynamic in the shadow view frustum.
Think about what happens when you approach a scene with lights that cast shadows:
1. The lights that are close enough and appropriate to cast shadows are picked -> their shadow maps are updated.
2. While we move on, for the lights from 1. we only update shadow maps if there is a moving / dynamic object in the shadow view; we then start with the next bunch of shadows while the shadows from 1. are still in view.
3. And so on.
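A minimal sketch of what such a dedicated cache could look like; all names are assumptions for illustration, and this is not Yann Lombard's code:
#include <vector>

struct ShadowMapCacheEntry
{
    int lightId;      // the light this map belongs to
    int resolution;   // e.g. 256, 512 or 1024
    unsigned texture; // handle of the cached shadow map
    bool dirty;       // set when a dynamic object enters the shadow view frustum
};

// hypothetical function that re-renders a single cached map
void RenderShadowMap(ShadowMapCacheEntry& entry);

void UpdateShadowCache(std::vector<ShadowMapCacheEntry>& cache)
{
    for (size_t i = 0; i < cache.size(); ++i)
    {
        if (!cache[i].dirty)
            continue;              // nothing moved: reuse the cached map
        RenderShadowMap(cache[i]); // only dirty maps cost fillrate this frame
        cache[i].dirty = false;
    }
}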
Saturday, December 6, 2008
Dual-Paraboloid Shadow Maps
Here is an interesting post on dual-paraboloid shadow maps. Pat Wilson describes a single-pass approach here:
http://www.gamedev.net/community/forums/topic.asp?topic_id=517022
This is pretty cool. Culling geometry into the two hemispheres becomes obsolete here. Other than that, the usual comparison between cube maps and dual-paraboloid maps applies:
- the number of draw calls is the same ... so you do not save on this front
- you lose memory bandwidth with cube maps because in the worst case you render everything into six maps that are probably bigger than 256x256 ... in reality you won't render all six faces and therefore end up with fewer draw calls than with dual-paraboloid maps
- the quality is much better with cube maps
- the speed difference is not that huge because dual-paraboloid maps use things like texkill or alpha test to pick the right map, and that kind of rendering is pretty slow without Hierarchical Z
I think both techniques are equivalent for environment maps .. for shadows you might prefer cube maps; if you want to save memory, dual-paraboloid maps are the only way to go.
Update: just saw this article on dual-paraboloid shadow maps:
http://osman.brian.googlepages.com/dpsm.pdf
The basic idea is that you do the world-space -> paraboloid transformation in the pixel shader during your lighting pass. That avoids having the paraboloid coordinates interpolated incorrectly.
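For reference, the paraboloid mapping itself is only a few lines. Here it is as a C sketch, assuming v is the normalized light-space direction to the shaded point and z >= 0 selects the front hemisphere; this is not code from the linked paper:
// v = (vx, vy, vz) must be normalized; writes the paraboloid texture coordinates
void ParaboloidCoords(float vx, float vy, float vz, float *u, float *t)
{
    // the paraboloid maps the hemisphere into the unit disc
    *u = vx / (1.0f + vz);
    *t = vy / (1.0f + vz);
}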
iP* programming tip #5
Let's look today at the "pixel shader" level of the hardware functionality. The iPhone Application Programming Guide says that an application should not use more than 24 MB for textures and surfaces. It seems like those 24 MB are not in video card memory; I assume that all of the data is stored in system memory and the graphics card memory is not used.
Overall, the iP* platform has the following limits:
- The maximum texture size is 1024x1024
- 2D textures are supported; other texture targets are not
- Stencil buffers aren't available
Texture filtering is described on page 99 of the iPhone Application Programming Guide. An extension for anisotropic filtering is also supported, which I haven't tried yet.
The pixel shader of the iP* platform is programmed via texture combiners. There is an overview of all OpenGL ES 1.1 calls at
http://www.khronos.org/opengles/sdk/1.1/docs/man/
The texture combiners are described in the page on glTexEnv. Per-Pixel Lighting is a popular example:
// N.L
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvf(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_DOT3_RGB); // Blend0 = N.L
glTexEnvf(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_TEXTURE); // normal map
glTexEnvf(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
glTexEnvf(GL_TEXTURE_ENV, GL_SOURCE1_RGB, GL_PRIMARY_COLOR); // light vector
glTexEnvf(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
// N.L * color map (on the second texture unit)
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvf(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_MODULATE); // N.L * color map
glTexEnvf(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_PREVIOUS); // previous result: N.L
glTexEnvf(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
glTexEnvf(GL_TEXTURE_ENV, GL_SOURCE1_RGB, GL_TEXTURE); // color map
glTexEnvf(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
Check out the Oolong example "Per-Pixel Lighting" in the folder Examples/Renderer for a full implementation.
Friday, December 5, 2008
iP* programming tip #4
All of the source code presented in this series is based on the Oolong engine. I will refer to the examples when appropriate so that everyone can look up the code or try it on their own. This tip covers the very simple basics of an iP* app. Here is the most basic piece of code to start a game:
// “View” for games in applicationDidFinishLaunching
// get screen rectangle
CGRect rect = [[UIScreen mainScreen] bounds];
// create one full-screen window
_window = [[UIWindow alloc] initWithFrame:rect];
// create OpenGL view
_glView = [[EAGLView alloc] initWithFrame: rect pixelFormat:GL_RGB565_OES depthFormat:GL_DEPTH_COMPONENT16_OES preserveBackBuffer:NO];
// attach the view to the window
[_window addSubview:_glView];
// show the window
[_window makeKeyAndVisible];
The screen dimensions are retrieved from a screen object. Erica Sadun compares the UIWindow functionality to a TV set and the UIView to actors in a TV show. I think this is a good way to memorize the functionality. In our case EAGLView, which comes with the Apple SDK, inherits from UIView and adds all the OpenGL ES functionality to it. We then attach this view to the window and make everything visible.
Oolong assumes a full-screen window that does not rotate; it is always in widescreen view. The reason for this is that otherwise accelerometer usage (to drive a camera with the accelerometer, for example) wouldn't be possible.
There is a corresponding dealloc method to this code that frees all the allocated resources again.
An Oolong engine example mainly consists of two files: a file with "delegate" in its name and the main application file. The main application file has the following methods:
- InitApplication()
- QuitApplication()
- UpdateScene()
- RenderScene()
The first pair of methods does one-time, device-dependent resource allocations and deallocations, while UpdateScene() prepares scene rendering and the last method does what its name says. If you would like to extend this framework to handle orientation changes, you would add a pair of methods with names like InitView() and ReleaseView() and handle all orientation-dependent code in there. Those methods would then be called whenever the orientation changes (only once per change) and at the start of the application.
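A sketch of the extended interface could look like this; InitView() and ReleaseView() are the hypothetical names from the paragraph above and not part of Oolong:
class CShell
{
public:
    // one-time, device-dependent setup / teardown
    bool InitApplication();
    bool QuitApplication();
    // hypothetical pair: called at startup and once per orientation change
    bool InitView();
    bool ReleaseView();
    // per-frame work
    bool UpdateScene();
    bool RenderScene();
};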
One other basic topic is the usage of C++. In Apple speak this is called Objective-C++. Cocoa Touch wants to be addressed in Objective-C, so pure C or C++ code is not possible. For game developers there is lots of existing C/C++ code to be re-used, and using it makes games easier to port to several platforms (it is quite common to launch an IP on several platforms at once). The best solution to this dilemma is to use Objective-C where necessary and then wrap it in C/C++.
If a file has the suffix *.mm, the compiler can handle Objective-C, C and C++ code pieces at the same time, to a certain degree. If you look in Oolong for files with this suffix, you will find many of them. There are whitepapers and tutorials available for Objective-C++ that describe the limitations of the approach. Because garbage collection is not used on the iP* device, I want to believe that the challenges of making this work on this platform are smaller. Here are a few examples of how the bridge between Objective-C and C/C++ is built in Oolong. In the main application class of every Oolong example, we bridge from the Objective-C code in the "delegate" file to the main application file like this:
// in Application.h
class CShell
{
    ...
    bool UpdateScene();
};

// in Application.mm
bool CShell::UpdateScene()
{
    ...
}

// in Delegate.mm
static CShell *shell = NULL;
...
if (!shell->UpdateScene()) printf("UpdateScene error\n");
An example of how to call an Objective-C method from C++ can look like this (C wrapper):
// in PolarCamera.mm -> C wrapper around the Objective-C method
void UpdatePolarCamera()
{
    [idFrame UpdateCamera]; // idFrame holds the id of the Objective-C object
}

// the Objective-C method itself
- (void) UpdateCamera
{
    ...
}

// in Application.mm
bool CShell::UpdateScene()
{
    UpdatePolarCamera();
    ...
}
The idea is to retrieve the id of a class instance and then use this id to address a method of the class from the outside.
If you want to see all this in action, open up the skeleton example in the Oolong Engine source code. You can find it at
Examples/Renderer/Skeleton
Now that we are at the end of this tip, I would like to refer to a blog post that my friend Canis wrote about memory management. It applies quite well to the iP* platforms:
http://www.wooji-juice.com/blog/cocoa-6-memory.html
Wednesday, December 3, 2008
iP* programming tip #3
Today I will cover the necessary files of an iP* application and the folders that potentially hold data on the device from your application.
Every iP* app is sandboxed. That means that only certain folders, network resources and hardware can be accessed. Here is a list of the files in your .app bundle:
- The .app folder holds everything; no particular hierarchy is required
- .lproj folders hold the language support
- The executable
- Info.plist - an XML property list that holds the product identifier, which allows the app to communicate with other apps and register with Springboard
- Icon.png (57x57); set UIPrerenderedIcon to true in Info.plist if you do not want the gloss / shiny effect
- Default.png ... should match the game background; no "Please wait" sign ... use a smooth fade
- XIB (NIB) files - precooked, addressable user-interface classes; remove the NSMainNibFile key from Info.plist if you do not use them
- Your own files; for example in demoq3/quake3.pak
And here are the folders on the device that might be affected by your application:
- Preferences files live in /var/mobile/Library/Preferences, named after the product identifier (e.g. com.engel.Quake.plist); they are updated when you use something like NSUserDefaults to add persistence to game data like save and load (see the sketch after this list)
- App plug-ins would go into /System/Library (not available)
- Documents go into /Documents
- Each app has its own tmp folder
- The sandbox spec lives e.g. in /usr/share/sandbox; don't touch it
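A minimal sketch of such NSUserDefaults persistence; the key names and values are made up for illustration:
// save progress; this writes the preferences .plist mentioned above
int currentLevel = 3; // made-up game state
float playerHealth = 0.75f;
NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
[defaults setInteger:currentLevel forKey:@"savedLevel"];
[defaults setFloat:playerHealth forKey:@"savedHealth"];
[defaults synchronize];
// load it again at startup
int savedLevel = [defaults integerForKey:@"savedLevel"];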
Tuesday, December 2, 2008
HLSL 5.0 OOP / Dynamic Shader Linking
I just happened to bump into a few slides on the new HLSL 5.0 syntax. The slides are at
http://www.microsoft.com/downloads/details.aspx?FamilyId=32906B12-2021-4502-9D7E-AAD82C00D1AD&displaylang=en
I thought I'd comment on those slides because I do not get the main idea. The slides mention a combinatorial explosion for shaders. On slide 19 they show three arrows that go in three directions: one is called Number of Lights, another Environmental Effects, and the third Number of Materials.
Regarding the first one: even people who have never worked on a game know the words Deferred Lighting. If you want many lights, you do the lighting in a way that the same shader is used for each light type. Assuming that we have a directional, a point and a spot light, this brings me to three shaders (I currently use three but might increase this to six).
One arrow talks about Environmental Effects. Most environmental effects nowadays are part of PostFX or a dedicated sky dome system. That adds two more shaders.
The last arrow says Number of Materials. Usually we have up to 20 different shaders for different materials.
This brings me to, let's say, 30 - 40 different shaders in a game. I can't consider this a combinatorial explosion so far.
On slide 27 it is mentioned that the major driving point for introducing OOP is dynamic shader linkage. It seems like there is a need for dynamic shader linkage because of the combinatorial explosion of shaders.
So in essence the language design of HLSL is driven by the assumption that we have too many shaders and can't cope with the sheer quantity. To fix this we need dynamic shader linkage, and to make that happen we need OOP in HLSL.
It is hard for me to follow this logic. It looks to me like we are taking a huge step back here: not focusing on the real needs, and adding code bloat.
Dynamic shader linkers have long been proven useless in game development; the previous attempts in this area were buried with old DirectX 9 SDKs. The reason for this is that they do not allow hand-optimizing code, which is very important to make your title competitive. As soon as you change one of the shader fragments, this has an impact on the performance of other shaders. Depending on whether you hit a performance sweet spot or not, you can get very different performance out of graphics cards.
Because the performance of your code base becomes less predictable, you do not want to use a dynamic shader linker if you want to create competitive games in the AAA segment.
Game developers need more control over the performance of the underlying hardware. We are already forced to use NVAPI and other native APIs to ship games on the PC platform with an acceptable feature set and performance (especially for SLI configs) because DirectX does not expose the functionality. For the DirectX 9 platform we are looking into CUDA and CAL support for PostFX.
This probably does not have much impact on the HLSL syntax, but in general I would prefer having more ways to squeeze performance out of graphics cards over any OOP extension that does not sound like it increases performance. At the end of the day, the language is a tool to squeeze as much performance as possible out of the hardware. What else would you want to do with it?
iP* programming tip #2
Today's tip deals with the setup of your development environment. As a Mac newbie I had a hard time getting used to the environment more than a year ago, when I started Mac development, and I still suffer from windowitis. I know that Apple does not want to copy MS's Visual Studio, but most people who are used to working with Visual Studio would put that on their holiday wishlist :-)
Here are a few starting points to get used to the environment:
- To work in one window only, use the "All-in-One" mode if you miss Visual Studio (http://developer.apple.com/tools/xcode/newinxcode23.html). Load Xcode without loading any projects, go straight to the Preferences/General tab, and you'll see "Layout: Default". Switch that to "Layout: All-In-One", click OK, and then load your projects.
- Apple+tilde - cycle between the windows of the foreground application
- Apple+w - closes the front window in most apps
- Apple+tab - cycle through applications
If you prefer hotkeys to start applications, check out Quicksilver. Automatically hiding and showing the Dock gives you more workspace. If you give presentations about your work, check out Stage Hand for the iPod touch / iPhone.
For reference you should download the POWERVR SDK for Linux. It is a very helpful reference regarding the MBX chip in your target platforms.
Not very game- or graphics-programming related, but very helpful, is Erica Sadun's book "The iPhone Developer's Cookbook". She does not waste your time with details you are not interested in and comes straight to the point. Just reading the first section of the book is already pretty cool.
You want to have this book if you want to dive into any form of Cocoa interface programming.
The last book I want to recommend is Andrew M. Duncan's "Objective-C Pocket Reference". I usually have it lying on my table in case I stumble over Objective-C syntax. If you are a C/C++ programmer you probably do not need more than this. There are also Objective-C tutorials on the iPhone developer website and on the general Apple website.
If you have any other tip that I can add here, I will mention it with your name.
Update: PpluX sent me the following link:
He describes there how he disables deep sleep mode and modifies the usage of Spaces.
The next iP* programming tip will be more programming related ... I promise :-)