Calculating screen-space texture coordinates for the 2D projection of a volume is more involved than for an already transformed full-screen quad. Here is a step-by-step approach:
1. The vertex shader transforms the position into projection space by multiplying it by the concatenated World-View-Projection matrix.
2. The Direct3D run-time then divides those values by the homogeneous w component, into which the projection matrix stored the view-space depth. The resulting position is in clipping space, where the x and y values are clipped to the [-1.0, 1.0] range.
xclip = xproj / wproj
yclip = yproj / wproj
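As a numerical sanity check (plain Python, with a made-up projection-space point), the divide by w yields clip-space values inside [-1.0, 1.0] for a visible point:

```python
# Hypothetical projection-space position; wproj holds the view-space
# depth that the projection matrix wrote into the w component.
xproj, yproj, wproj = 2.0, -1.5, 5.0

# Perspective divide performed by the Direct3D run-time
xclip = xproj / wproj   # 0.4
yclip = yproj / wproj   # -0.3

assert -1.0 <= xclip <= 1.0 and -1.0 <= yclip <= 1.0
```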
3. The Direct3D run-time then transforms the position into viewport space, mapping the range [-1.0, 1.0] to [0.0, ScreenWidth] for x and [0.0, ScreenHeight] for y.
xviewport = xclipspace * ScreenWidth / 2 + ScreenWidth / 2
yviewport = -yclipspace * ScreenHeight / 2 + ScreenHeight / 2
This can be simplified to:
xviewport = (xclipspace + 1.0) * ScreenWidth / 2
yviewport = (1.0 - yclipspace ) * ScreenHeight / 2
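A quick check in plain Python (with an arbitrary clip-space point and render-target size) confirms that the simplified form produces the same viewport coordinates as the original one:

```python
ScreenWidth, ScreenHeight = 1280.0, 720.0
xclipspace, yclipspace = 0.4, -0.3

# Original form of the viewport transform
xv1 = xclipspace * ScreenWidth / 2 + ScreenWidth / 2
yv1 = -yclipspace * ScreenHeight / 2 + ScreenHeight / 2

# Simplified form
xv2 = (xclipspace + 1.0) * ScreenWidth / 2
yv2 = (1.0 - yclipspace) * ScreenHeight / 2

# Both give approximately (896, 468) for this point
assert abs(xv1 - xv2) < 1e-9 and abs(yv1 - yv2) < 1e-9
```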
The result represents the position on the screen. The y component needs to be inverted because in world / view / projection space it increases in the opposite direction to screen coordinates.
4. Because the result should be in texture space rather than screen space, the coordinates need to be transformed from clipping space to texture space; in other words, from the range [-1.0, 1.0] to the range [0.0, 1.0].
u = (xclipspace + 1.0) * 1 / 2
v = (1.0 - yclipspace ) * 1 / 2
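This mapping can be sketched as a small helper (plain Python); the corner cases show that clip-space (-1, 1) lands at the texture origin and the v axis is flipped:

```python
def clip_to_uv(xc, yc):
    """Map clip space [-1, 1] to texture space [0, 1], flipping v."""
    return (xc + 1.0) * 0.5, (1.0 - yc) * 0.5

assert clip_to_uv(-1.0,  1.0) == (0.0, 0.0)   # top-left of the screen
assert clip_to_uv( 1.0, -1.0) == (1.0, 1.0)   # bottom-right
assert clip_to_uv( 0.0,  0.0) == (0.5, 0.5)   # screen center
```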
5. Due to the mismatch between pixel centers and texel centers in Direct3D 9, the texture coordinates need to be adjusted by half a texel, which in [0.0, 1.0] texture space is ½ / TargetWidth respectively ½ / TargetHeight:
u = (xclipspace + 1.0) * ½ + ½ / TargetWidth
v = (1.0 - yclipspace ) * ½ + ½ / TargetHeight
Plugging in the x and y clip-space coordinates from step 2 gives:
u = (xproj / wproj + 1.0) * ½ + ½ / TargetWidth
v = (1.0 - yproj / wproj ) * ½ + ½ / TargetHeight
6. Because this calculation should happen in the vertex shader, the results will be sent down through the texture-coordinate interpolator registers. However, interpolating 1/wproj is not the same as 1 / interpolated wproj. Therefore the term 1/wproj needs to be extracted and applied in the pixel shader:
u = 1/ wproj * ((xproj + wproj) * ½ + ½ * wproj / TargetWidth)
v = 1/ wproj * ((wproj - yproj) * ½ + ½ * wproj / TargetHeight)
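As a sanity check (plain Python, arbitrary point and render-target size, with the half-texel offset expressed as ½ / TargetWidth), the factored form recovers exactly the direct form once the pixel shader applies 1/wproj:

```python
xproj, yproj, wproj = 2.0, -1.5, 5.0
TargetWidth, TargetHeight = 1280.0, 720.0

# Direct form (step 5 with the clip-space values plugged in)
u_direct = (xproj / wproj + 1.0) * 0.5 + 0.5 / TargetWidth
v_direct = (1.0 - yproj / wproj) * 0.5 + 0.5 / TargetHeight

# Factored form (step 6): the parenthesized part runs in the vertex
# shader and interpolates linearly; the pixel shader divides by wproj.
u_vs = (xproj + wproj) * 0.5 + 0.5 * wproj / TargetWidth
v_vs = (wproj - yproj) * 0.5 + 0.5 * wproj / TargetHeight
u_ps, v_ps = u_vs / wproj, v_vs / wproj

assert abs(u_ps - u_direct) < 1e-12 and abs(v_ps - v_direct) < 1e-12
```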
The vertex shader source code looks like this (assuming inScreenDim.xy holds the reciprocal render-target dimensions 1/TargetWidth and 1/TargetHeight):
float4 vPos = float4(0.5 * (float2(p.x + p.w, p.w - p.y) + p.w * inScreenDim.xy), p.zw);
The equation without the half-pixel offset would start at step 4 like this:
u = (xclipspace + 1.0) * 1 / 2
v = (1.0 - yclipspace ) * 1 / 2
Plugging in the x and y clip-space coordinates from step 2 gives:
u = (xproj / wproj + 1.0) * ½
v = (1.0 - yproj / wproj ) * ½
Moving 1 / wproj to the front leads to:
u = 1/ wproj * ((xproj + wproj) * ½)
v = 1/ wproj * ((wproj - yproj) * ½)
Because the pixel shader performs the division by wproj, this leads to the following vertex shader code:
float4 vPos = float4(0.5 * float2(p.x + p.w, p.w - p.y), p.zw);
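The same split can be checked numerically (plain Python, arbitrary point): the vertex shader emits only the scaled numerators, and dividing by the interpolated wproj in the pixel shader reproduces the direct clip-space to texture-space mapping of step 4:

```python
xproj, yproj, wproj = 2.0, -1.5, 5.0

# Vertex shader part: scaled numerators, no half-texel offset
u_vs = (xproj + wproj) * 0.5
v_vs = (wproj - yproj) * 0.5

# Pixel shader part: divide by the interpolated wproj
u, v = u_vs / wproj, v_vs / wproj

# Matches the direct mapping (xclip + 1) * 1/2 and (1 - yclip) * 1/2
assert abs(u - (xproj / wproj + 1.0) * 0.5) < 1e-12
assert abs(v - (1.0 - yproj / wproj) * 0.5) < 1e-12
```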
All this is based on a response by mikaelc in the thread "Lighting in a Deferred Renderer" and a response by Frank Puig Placeres in the thread "Reconstructing Position from Depth Data".