The idea is to store the partial derivatives of the normal in two channels of the map like this:
dx = (-nx/nz);
dy = (-ny/nz);
Then you can reconstruct the normal like this:
nx = -dx;
ny = -dy;
nz = 1;
normalize(n);
The advantage is that you do not have to reconstruct Z, so you can skip one instruction in each pixel shader that uses normal maps.
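To make the reconstruction concrete, here is a minimal HLSL-style sketch, assuming the two derivatives are stored in the first two channels of an unsigned texture so an explicit scale and bias from [0,1] to [-1,1] is still needed; derivativeMap and uv are placeholder names:
// Fetch the stored partial derivatives (dx in .x, dy in .y) and undo the 0..1 packing.
float2 d = tex2D(derivativeMap, uv).xy * 2.0f - 1.0f;
// Reconstruct the tangent-space normal; note there is no sqrt needed for Z.
float3 n = normalize(float3(-d.x, -d.y, 1.0f));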
This is especially cool on the PS3, while on the XBOX 360 you can also create a custom texture format to let the texture fetch unit do the scale and bias and save a cycle there.
More details can be found at
Look for Partial Derivative Normal Maps.
But you can also get the PS3 hardware to do the scale & bias with a special texture instruction in one cycle, whereas with the normalize it will be two cycles.
This is the one cycle I mentioned ... with Partial Derivative Normal Maps this cycle is obsolete.
Yes, PS3 can do a tex_bx2 (scale and bias texture fetch) instruction. If your normals are half precision you can do a texture fetch, scale and bias, and normalize all in one cycle.
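In shader terms the saving looks roughly like this (a sketch, with normalMap as a placeholder name); the scale-and-bias fetch simply folds the * 2 - 1 into the texture instruction:
// Plain fetch: the scale and bias is explicit ALU work before the normalize.
float3 n = normalize(tex2D(normalMap, uv).xyz * 2.0f - 1.0f);
// With a tex_bx2-style fetch the * 2 - 1 comes back already applied, leaving only
// the normalize; per the comment above, with half-precision normals even that can
// be absorbed into the same cycle.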
However, the downside of doing so is that you need your normals in three components of your texture. DXT1 doesn't give enough quality and ARGB is too large.
In order not to sacrifice too much quality or too much texture size, we can put the XY in either a G8B8 texture (higher quality) or DXT5 (better compression) and unpack the Z value. I believe the point of partial derivative normal maps is that unpacking the z value takes one less cycle than the standard sqrt(1 - x*x - y*y) used to find it.
The other interesting insight I got was the use of DXT5 textures. This would give 8 bits per pixel, with X interpolated in 4 steps from 6-bit endpoints and Y interpolated in 8 steps from 8-bit endpoints. I suppose the Red and Blue channels are thrown away (at least for the purpose of unpacking the normal). I haven't tried this, so I wonder how the quality holds up.
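For comparison, a sketch of the two-channel reconstruction being discussed, assuming X is stored in green and Y in alpha as described above (the exact swizzle depends on how the map was packed):
// Fetch the two stored components and undo the 0..1 packing.
float2 nxy = tex2D(normalMap, uv).ga * 2.0f - 1.0f;
// Rebuild Z from the unit-length constraint -- this is the extra work the
// partial-derivative encoding avoids.
float nz = sqrt(saturate(1.0f - dot(nxy, nxy)));
float3 n = float3(nxy, nz);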
The Xbox 360 can support compressed normal maps via the DXN format, which would be much preferable.
Could you please tell me how you do the "custom texture format" on Xbox 360? I haven't found anything in the SDK documentation about this.
Thx in advance
<<< The other interesting insight I got was the use of DXT5 textures. [...] I wonder how the quality holds up. >>>
This is a pretty old trick as far as I understand. John Carmack came up with it for Doom.
Custom Texture Formats: yes, nothing in the documentation :-) ... there is a Gamefest 2008 presentation that talks about this. It is available on the MS website.
David E.: you are right, it is not the scale and bias but the reconstruction of Z that you save ... my mistake. Thanks for clarifying this.
Thanks, I did find quite a few references on the web for using DXT5 for normal map compression after I posted.
I'm glad you brought this to my attention. I use partial derivatives for quick normal calculation for height fields, so it seems obvious that you can use them for tangent space normal maps.
It would seem this technique is only able to reconstruct a subset of the possible normals. For instance, how could you store the (1, 0, 0) normal?
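As an aside on the height-field case mentioned above, a rough sketch of that calculation (heightMap, texelSize and heightScale are made-up names); it also shows why the map stores -nx/nz and -ny/nz, since for a height field those ratios are exactly the height derivatives:
// Central differences over a height texture.
float hL = tex2D(heightMap, uv - float2(texelSize, 0)).r;
float hR = tex2D(heightMap, uv + float2(texelSize, 0)).r;
float hD = tex2D(heightMap, uv - float2(0, texelSize)).r;
float hU = tex2D(heightMap, uv + float2(0, texelSize)).r;
// Partial derivatives of the height field, up to the chosen height scale.
float2 dh = float2(hR - hL, hU - hD) * (heightScale / (2.0f * texelSize));
// The height-field normal is proportional to (-dh/dx, -dh/dy, 1) -- the same form
// the partial derivative normal map reconstructs.
float3 n = normalize(float3(-dh, 1.0f));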
If the scale-bias we're talking about is a standard x2-1, then the smallest z value possible is 1/sqrt(3) ≈ 0.577. That leaves out a rather substantial chunk of the possible normals.
That was my interpretation as well, Humus. Whenever the normal is skewed enough that Z is no longer the largest-magnitude component, the whole thing just completely fails. Someone suggested a similar technique in a gamedev thread, and I observed the same thing there. But no one else seems to have pointed it out, or agreed ... so I'm wondering if I'm missing something.
Brian and Humus, I think you are both right about the limited range of normals this method can represent. I haven't tried it yet, but there are definitely ways around that problem.
Probably the simplest way is to set Z to a smaller value before normalization, for example 0.5 instead of 1.0. This will increase the range of the normals, but you may lose a little quality. You won't be able to get (1,0,0) with this method either, but it is not very likely you will need to. The majority of normal values are likely going to be closer to (0,0,1) than toward the edges, so this technique will work in the majority of cases.
It may be possible to calculate the normal range when generating a partial derivative normal map and derive the optimal pre-normalized z value, which is then passed to the shader via a uniform parameter.
Setting the z value this way may also have the side effect of increasing the quality of the normal map, since it helps you more fully use the bits provided by your texture format.
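A sketch of what that could look like, with the pre-normalization z passed in as a uniform (zScale is a made-up name); note the map itself would have to be generated with the matching scale, i.e. storing -zScale * nx/nz and -zScale * ny/nz:
uniform float zScale; // e.g. 0.5; could be computed per map from the actual derivative range

float2 d = tex2D(derivativeMap, uv).xy * 2.0f - 1.0f;
// With zScale = 1 the representable cone tops out at 45 degrees per axis
// (about 55 degrees at the diagonals, where z falls to 1/sqrt(3) ~ 0.577).
// zScale = 0.5 widens that to roughly 63-70 degrees at the cost of precision near (0,0,1).
float3 n = normalize(float3(-d.x, -d.y, zScale));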
Thx for the information on the texture formats, it's a very interesting solution.
I hope that someone can help me with another big problem.
I render HDR into a 10:10:10:2 EDRAM render target on the Xbox and I have a few dark scenes. I get very bad banding because the color changes there are too small for the 10 bits per channel.
Could anyone tell me how to get around this easily? I tried to remap the colors so the dark areas get more precision, but then the bright particle effects start to band.
Thx in advance.
Humus and Brian are both correct. You do sacrifice a portion of your normal range in this encoding. We only store 0-1 for each component and so can't represent anything that tilts more than about 45 degrees from (0, 0, 1) along either axis. There's a trade-off there; obviously you can represent more range at the cost of lower precision if you choose. You'll never be able to store (1, 0, 0) though.
It hasn't been said here, but the reason we use this encoding (and are willing to sacrifice the range) over the standard DXT encoding is that it's so much nicer for applying "detail" normal maps on top of existing base normal maps. Say you have two normals, your base normal and your detail normal; ideally what you want is to apply your detail normal to the hemisphere defined by your base normal (as opposed to the hemisphere defined by the tangent space at that point). The math to do this with standard xyz normals is a bit more than what you really want to be doing in a fragment program. By storing this representation, however, we can achieve it by simply adding the partial derivatives together before the normalization.
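A sketch of that combination (texture names and the detailTiling factor are placeholders); the whole trick is the single add before the shared normalize:
// Both maps store (-nx/nz, -ny/nz) with the usual 0..1 packing.
float2 dBase   = tex2D(baseDerivativeMap, uv).xy * 2.0f - 1.0f;
float2 dDetail = tex2D(detailDerivativeMap, uv * detailTiling).xy * 2.0f - 1.0f;
// Adding the derivatives sums the underlying slopes, so the detail perturbs the
// surface defined by the base map rather than the flat tangent plane.
float3 n = normalize(float3(-(dBase + dDetail), 1.0f));
Counting the add, the float3 construction and the normalize, this is presumably the handful of instructions the PDF quote further down refers to.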
Why the minus sign in both the creation and reconstruction phases? Can't you save a cycle there if you just do:
Creation:
dx = nx/nz;
dy = ny/nz;
Reconstruction:
nx = dx;
ny = dy;
nz = 1;
normalize(n);
In that PDF they say "Better yet, it allows the detail normal map to be combined with the main normal map using only 3 instructions (versus ~10 with the standard encoding)."
Do you have any idea what 3 instructions they are referring to? How do they combine their detail normal map with their regular normal map?