Comments on Diary of a Graphics Programmer: Calculating Screen-Space Texture Coordinates for the 2D Projection of a Volume (Wolfgang Engel)

---
Unknown (2008-09-10, 16:34):

Oh, and I am sure you know that consoles (the D3D9-based ones, anyway) provide a render state to turn off this annoying half-pixel thing.

---
Unknown (2008-09-10, 16:09):

Hi Wolfgang,

The "w" divide is done automatically in the texture2DProj call. I believe the D3D version is called tex2Dproj.

---
Pat Wilson (2008-09-10, 15:33):

Sorry, I just re-read that comment and realized none of those thoughts were really complete. I was distracted.

For the g-buffer I am storing world-space normals using spherical coordinates. For the 8:8:8:8 target case, if you store normal.xy and reconstruct z, you need to know the sign of z, since it is really +/-sqrt(1 - dot(normal.xy, normal.xy)). So before I switched to spherical storage, I had to store stuff like this:

8:8:8:8
normal.xy_8_8 | sign(z)_1 | depthHi_7 | depthLo_8

This added unpack time to retrieving the depth value and the normal value. It also gave me only 15 bits to store depth, instead of 16.
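For illustration, the sign/depth packing described above could be sketched on the CPU like this (a hypothetical sketch based only on the layout named in the comment, not actual engine code; the sign bit shares a channel with the top 7 bits of a 15-bit depth):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical pack/unpack for the sign(z)_1|depthHi_7|depthLo_8 layout:
// one 8-bit channel carries sign(z) in the top bit plus the high 7 bits of
// a 15-bit depth; the next 8-bit channel carries the low 8 depth bits.
uint8_t packSignDepthHi(bool zPositive, uint16_t depth15) {
    return (uint8_t)((zPositive ? 0x80 : 0x00) | ((depth15 >> 8) & 0x7F));
}

uint8_t packDepthLo(uint16_t depth15) {
    return (uint8_t)(depth15 & 0xFF);
}

void unpackSignDepth(uint8_t hi, uint8_t lo, bool* zPositive, uint16_t* depth15) {
    *zPositive = (hi & 0x80) != 0;
    *depth15   = (uint16_t)(((hi & 0x7F) << 8) | lo);
}
```

This is exactly the extra ALU work the comment complains about: every normal or depth read pays for the shifting and masking.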
Switching to spherical coordinates got me back that bit, and removed another op from getting the depth value.

The spherical format is generated like this:

inline float2 cartesianToSpGPU( in float3 normalizedVec )
{
    float atanYX = atan2( normalizedVec.y, normalizedVec.x );
    float2 ret = float2( atanYX / PI, normalizedVec.z );

    return (ret + 1.0) * 0.5;
}

and retrieved like this:

inline float3 spGPUToCartesian( in float2 spGPUAngles )
{
    float2 expSpGPUAngles = spGPUAngles * 2.0 - 1.0;
    float2 scTheta;

    sincos( expSpGPUAngles.x * PI, scTheta.x, scTheta.y );
    float2 scPhi = float2( sqrt( 1.0 - expSpGPUAngles.y * expSpGPUAngles.y ), expSpGPUAngles.y );

    // Renormalization not needed
    return float3( scTheta.y * scPhi.x, scTheta.x * scPhi.x, scPhi.y );
}

It is slightly more expensive to reconstruct than the cartesian version, but I think (this may not be true) that because the light shaders use the surface normal last, the GPU can do the work whenever it has time.

On lower-end cards, the atan2 and sincos functions take longer. Some of the ATI boards with unified shaders have, per ALU, four shader cores which can't do transcendental functions and one which can; NVIDIA cards have four cores per ALU, and each can do all ops. I encoded sincos and atan2 into A8 lookup textures for that case, and it works better.

---
Pat Wilson (2008-09-10, 15:04):

Wolfgang,
(Pat Wilson from GarageGames)

It doesn't require a dedicated depth buffer.
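A CPU translation of the two HLSL functions above makes the round trip easy to check (plain C++, hypothetical harness; the 8-bit quantization that happens on write to the render target is omitted):

```cpp
#include <cassert>
#include <cmath>

static const float PI = 3.14159265358979f;

// Encode a unit normal into two [0,1] values: theta = atan2(y,x)/PI and z,
// both remapped from [-1,1] to [0,1] (mirrors cartesianToSpGPU above).
void cartesianToSpGPU(const float n[3], float enc[2]) {
    float atanYX = std::atan2(n[1], n[0]);
    enc[0] = (atanYX / PI + 1.0f) * 0.5f;
    enc[1] = (n[2] + 1.0f) * 0.5f;
}

// Reconstruct the normal: sin(phi) = sqrt(1 - z*z), so no sign bit is needed
// (mirrors spGPUToCartesian above).
void spGPUToCartesian(const float enc[2], float n[3]) {
    float theta  = (enc[0] * 2.0f - 1.0f) * PI;
    float z      =  enc[1] * 2.0f - 1.0f;
    float sinPhi = std::sqrt(1.0f - z * z);
    n[0] = std::cos(theta) * sinPhi;
    n[1] = std::sin(theta) * sinPhi;
    n[2] = z;
}
```

In full precision the round trip is exact up to floating-point error; the real cost trade-off is the atan2/sincos on decode versus the extra channel and sign handling of the cartesian scheme.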
I am using these formats for g-buffers:

8:8:8:8
normal.theta | normal.phi | depthHi | depthLo

16:16:16:16
normal.theta | normal.phi | foo | depth

The reason I chose this method for world-space reconstruction is that it is very cheap, requiring only one mad in the case of a full-screen quad.

The z-data that is stored is also very good because it is linear, and it is in the range 0..1, where 1 is zFar in camera space. I like integer formats over FP16 formats for the g-buffer because I can control the ranges of the data.

I haven't done enough profiling to know for sure, but I think that using an 8:8:8:8 g-target may hit light shader performance significantly (it is slower on high-bandwidth cards, but not as much on low-bandwidth cards). The first thing the light shader does is sample from the g-buffer, but every subsequent thing it does depends on knowing the depth.

---
Wolfgang Engel (2008-09-10, 12:44):

Hi Pat,
I think this is the Crytek approach that was covered in a SIGGRAPH 2007 session by Carsten Wenzel. This looks very cool to me. Do you have to generate a dedicated depth buffer for this?

- Wolfgang

---
Wolfgang Engel (2008-09-10, 12:38):

Hi Damian,
Yes, without the DX9 offset this is the same.
So you can consider it trivial, but in my specific case we forgot about the half-pixel offset :-~ so I had to figure out why there was light leaking around a person :-) (we also use this to fetch shadow maps that are in screen space).

- Wolfgang

BTW: didn't you forget the divide by w? I would think there is something like

projectSpace.xy /= projectSpace.w;

in there as well.

---
Pat Wilson (2008-09-10, 09:51):

I am doing my world-space reconstruction using mostly comments found here: http://forum.beyond3d.com/showthread.php?t=45628

To store, in HLSL:

float3 wsPos = IN.pos.xyz / IN.pos.w;
float depth = dot( vEye, wsPos - eyePos );

where IN.pos comes from the vertex shader and is:

OUT.pos = mul( objToWorldMat, IN.position );

vEye is a shader constant: the world-space view vector normalized to 1/zFar.

eyePos is a shader constant: the world-space eye position.

I am storing depth in 16 bits as an integer, and this seems to be plenty.

To reconstruct:

float3 worldPos = eyePos + eyeRay * depth;

eyePos is a shader constant, the world-space eye position.

eyeRay is:

- For a full-screen quad:

Calculated in the vertex shader:
OUT.wsEyeRay = float4( IN.wsFrustCoord - eyePos, 1.0 );

Calculated in the pixel shader:
OUT.wsEyeRay = float4( IN.normal - eyePos, 1.0 );

In the vertex shader it is a full-screen quad, and each vertex carries the world-space coordinate of its far-frustum corner.
I am calculating them like this:

Point3F farFrustumCorners[4];
farFrustumCorners[0].set( frustLeft * zFarOverNear, zFar, frustBottom * zFarOverNear );
farFrustumCorners[1].set( frustLeft * zFarOverNear, zFar, frustTop * zFarOverNear );
farFrustumCorners[2].set( frustRight * zFarOverNear, zFar, frustTop * zFarOverNear );
farFrustumCorners[3].set( frustRight * zFarOverNear, zFar, frustBottom * zFarOverNear );

MatrixF camToWorld = thisFrame.worldToCamera;
camToWorld.inverse();

for( int i = 0; i < 4; i++ )
    camToWorld.mulP( farFrustumCorners[i] );

- For convex geometry, in the pixel shader:

float3 eyeRay = getDistanceVectorToPlane( negFarPlaneDotEye, IN.wsPos.xyz / IN.wsPos.w, farPlane );

'negFarPlaneDotEye' is a shader constant which is:

-dot( worldSpaceFarPlane, eyePosition )

'farPlane' is a shader constant which is the world-space far plane.

This function is from that thread:

inline float3 getDistanceVectorToPlane( in float negFarPlaneDotEye, in float3 direction, in float4 plane )
{
    float denom = dot( plane.xyz, direction.xyz );
    float t = negFarPlaneDotEye / denom;

    return direction.xyz * t;
}

-----

This works well for me.
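As a sanity check on the store/reconstruct math above, here is a small CPU sketch (hypothetical C++ harness, not engine code; eyeRay is built the full-screen-quad way, as a vector from the eye to the far plane through the shaded point):

```cpp
#include <cassert>
#include <cmath>

// vEye = view direction / zFar, so the stored depth is linear in [0,1],
// reaching 1 at the far plane, exactly as described in the comments above.
struct V3 { float x, y, z; };

static float dot3(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static V3 sub3(V3 a, V3 b)    { return V3{ a.x - b.x, a.y - b.y, a.z - b.z }; }

// Store pass: depth = dot( vEye, wsPos - eyePos )
float storeDepth(V3 vEye, V3 wsPos, V3 eyePos) {
    return dot3(vEye, sub3(wsPos, eyePos));
}

// Reconstruct pass: worldPos = eyePos + eyeRay * depth, where eyeRay runs
// from the eye through the pixel to the far plane (view-axis length == zFar).
V3 reconstructWorldPos(V3 eyePos, V3 eyeRay, float depth) {
    return V3{ eyePos.x + eyeRay.x * depth,
               eyePos.y + eyeRay.y * depth,
               eyePos.z + eyeRay.z * depth };
}
```

The reconstruction is exact because eyeRay's component along the view direction is zFar, so scaling it by depth (which is view-axis distance over zFar) lands back on the original point.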
I am sure it can be optimized further.

---
Unknown (2008-09-10, 00:34):

You should also mention that the half-pixel offset is specific to D3D9; D3D10, OpenGL, and consoles do not need to do this.

Code I have been using for years to do this (as seen in my Light Indexed Deferred Rendering code):

Vertex shader:

projectSpace = gl_ModelViewProjectionMatrix * gl_Vertex;
gl_Position = projectSpace;
projectSpace.xy = (projectSpace.xy + vec2(projectSpace.w)) * 0.5;

Fragment shader:

vec4 texValue = texture2DProj( TextureID, projectSpace );
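Tying the thread together: this CPU sketch computes what the vertex-shader bias plus texture2DProj's divide-by-w produce on the GPU, with the D3D9-only half-pixel correction the discussion keeps returning to (hypothetical names; a real D3D9 port would typically also flip v, which is omitted here):

```cpp
#include <cassert>
#include <cmath>

struct UV { float u, v; };

// Map a clip-space position (x, y, w) to a [0,1] screen-space texcoord.
// (x + w) * 0.5 / w is the same as ((x / w) + 1) * 0.5, i.e. the vertex-shader
// bias above followed by texture2DProj's projective divide. For D3D9 render
// targets only, add half a texel so samples land on texel centers.
UV clipToScreenUV(float x, float y, float w,
                  bool d3d9HalfPixel, float width, float height) {
    UV uv;
    uv.u = (x + w) * 0.5f / w;
    uv.v = (y + w) * 0.5f / w;
    if (d3d9HalfPixel) {
        uv.u += 0.5f / width;
        uv.v += 0.5f / height;
    }
    return uv;
}
```

Forgetting the d3d9HalfPixel term shifts every lookup by half a texel, which is exactly the kind of subtle error that showed up earlier in the thread as light leaking around a character.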