Comments on Diary of a Graphics Programmer: Edge Detection Trick (Wolfgang Engel)

Jay (2010-03-23 22:27):
Today I found that 8 + 8 bits for the depth check is not enough precision.

For the normal check, two channels are not enough to use the length trick; it has to be three channels.

Jay (2010-03-23 00:07):
On second thought, we can also store normal and depth in separate buffers. That way we don't need the point sampler.

    bool bEdge
        = clip( length( normalLinear.xyz ) - 0.8 )
        && clip( length( depthLinear.xy ) - 0.8 );

PS: Sorry for the many comments.

Jay (2010-03-22 23:48):
I may need more time to elaborate this idea, but let me try to put it here.

I think we can also exploit the characteristics of a normal for the depth values, if we store the one-dimensional depth value as a two-dimensional normalized value.

The calculation is:

    x = ( 1 - depth )
    y = depth

Then we normalize it so that the length is one: normalize( float2( x, y ) ).

For example, say we have 4 depth values: 0, 1, 0.5, 0.5.
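Worked through numerically, this encoding behaves like this (a Python sketch of the shader math, using the example depths; illustrative only):

```python
import math

def depth_dir(d):
    # Encode a scalar depth in [0, 1] as a unit 2D direction: normalize((1 - d, d)).
    x, y = 1.0 - d, d
    n = math.hypot(x, y)
    return (x / n, y / n)

depths = [0.0, 1.0, 0.5, 0.5]
dirs = [depth_dir(d) for d in depths]

# Average the unit vectors; the length drops below one when they disagree.
ax = sum(v[0] for v in dirs) / len(dirs)
ay = sum(v[1] for v in dirs) / len(dirs)
print(math.hypot(ax, ay))  # ~0.85: below one, so the depths disagree (edge)
```

Four identical depths would average to a unit-length vector, so a length threshold such as 0.8 separates edge from non-edge pixels.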
The normalized values will be ( 1, 0 ), ( 0, 1 ), ( 0.7, 0.7 ), and ( 0.7, 0.7 ).

The averaged value is ( 0.6, 0.6 ), whose length is 0.85. The length decreased from one because the vectors point in different directions in 2D space.

To put all of this together, we store the normal values in the r and g channels and the depth values in the b and a channels.

Please let me know if this doesn't seem to work.

Jay (2010-03-22 23:07):
It is quite an interesting idea. Now I see what I was missing. :-)

I found that since the nature of a normal keeps its length at one, the averaged length is close to one when the 4 normal values are very similar. If the normals point in different directions, the length must decrease. Thus "clip( abs(L-P) - epsilon )" seems to work fine.

However, depth values do not have this characteristic. It is possible that the randomly picked point is close to the average although the depth values actually vary. For example, 0, 5, 10, and 5 yield an average of 5.

Can we improve this depth problem?

Jay (2010-03-22 22:41):
I want to correct my hasty comment. The value 50% was not right; I was thinking of something else.

On second thought, it may work well.

Jay (2010-03-22 22:25):
Benualdo, you got my point already.
It is actually nothing new; it is already well explained in ShaderX7. I just applied it to the light pre-pass.

BTW, randomly picking one point out of 4 would not give us a good result. It would cause 50% false negatives on actual edge pixels.

Anonymous (2010-03-22 14:03):
Jay, I'm not sure I understand some details of the way you use the stencil: are you updating the stencil only once with the edge information, and using geometry and depth test to select the pixels that need lighting for each light? Or do you sample the edge value from the normal texture during lighting for each light? Before each light?

If this is not what you're already doing, you can do it this way:
- resolve the multisampled buffers,
- write S = 0x01 where edges were found.

Then for each light:
- write S = 0x02 where the light is visible (ref = 0x03 with write_mask = 0x02), using a geometry proxy of your light volume with depth test enabled,
- run the non-MSAA lighting shader (clip if multisampling is needed), with stencil writing 0 with write_mask = 0x02 (this removes bit 0x02 from the stencil and keeps bit 0x01 on edges),
- run the MSAA (supersampling) version of the lighting shader where stencil == 0x03,
- clear stencil bit 0x02 (write 0 with write_mask = 0x02) for the next light if needed.

With (stencil == 0x03) as your stencil "early out" test and the ref value (== 0x03) never changing, both passes are optimized by stencil early-out.

Anonymous (2010-03-22 10:14):
The texture coordinates
are the same for both the POINT and the LINEAR samples: it is just the middle of the 2x1 or 2x2 samples, i.e. the texture coordinate you would get without MSAA. When POINT filtering is enabled but the texture coordinate is in the exact middle of four texels and the texture has no mipmaps, the graphics card returns *one* of the 4 neighbouring texels, and for the MSAA edge detection we don't care which one.

The volume-texture mipmap trick is used instead of ddx/ddy simply because it is faster to let the hardware do the job than to add more instructions for it. (It's the same reason we should use alpha test instead of clip() when possible.) Compare the generated microcode with NVShaderPerf and you will see the difference.

The sign of the clip test is negative because we want to clip when a lower mipmap was chosen (because the texcoords are changing fast), in which case the fetched value is 1. Since HLSL clip(X) discards the pixel if any component of X is negative, this works out: if either tex3D returns 1 instead of 0, then (-edge1 - edge2) is negative.

Rem: you can use a signed texture to reduce the number of instructions in the PS.

Jay (2010-03-22 09:29):
I'd like to share a small experience I had recently. I'd welcome any comments on it, and hopefully I can fix it early if it has any faults.

For the case of 4xMSAA, we need to sample 4 times anyway in order to find edge pixels. If we have a separate buffer for depth, we need a total of 8 texture fetches.
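A sketch of that per-pixel test (Python; the epsilon values and sample data are made up for illustration, not taken from the comment):

```python
def is_edge(normals, depths, n_eps=0.1, d_eps=0.01):
    # One pixel of a 4xMSAA surface: 4 normal samples and 4 depth samples.
    # Flag an edge as soon as any sample disagrees with the first one.
    ref_n, ref_d = normals[0], depths[0]
    for n, d in zip(normals[1:], depths[1:]):
        if max(abs(a - b) for a, b in zip(n, ref_n)) > n_eps:
            return True
        if abs(d - ref_d) > d_eps:
            return True
    return False

print(is_edge([(0, 0, 1)] * 4, [0.5] * 4))                # False: samples agree
print(is_edge([(0, 0, 1)] * 3 + [(1, 0, 0)], [0.5] * 4))  # True: one normal differs
```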
This edge detection is expensive.

Therefore, we should split the edge detection out of each light's rendering into a separate step.

Say that previously we rendered normal and depth first, and then rendered each light to the screen regardless of multisampling.

We now render the normal and depth first, and then update a stencil buffer with the edge information, which may require the 8 fetches described above.
With the stencil set, we render twice per light: once for edge pixels and once for non-edge pixels. For the edge part we calculate the light value per sample point, which takes 4 calculations plus their average. For the non-edge part we can take one linear sample at the middle of the pixel and do the light calculation only once.

This way we don't have to do the expensive edge detection per light.

My final choice for the edge detection was the centroid trick. Although it did not give me a perfect result on PS3, 2/3 of the edges were correctly detected, and it required only one texture fetch to update the edge information in the stencil buffer; otherwise it could have been 8.

When I render the normal buffer, I use one whole channel for the edge information: if the centroid-interpolated value differs from the non-centroid value, I store 1; otherwise 0.
In the edge detection step, I sample the resolved MSAA normal value; if the edge channel is greater than zero I update the stencil, otherwise I discard the pixel so the stencil is not touched.

This approach requires a stencil buffer, so it may not be practical if the depth information is packed into the normal buffer.

Jay (2010-03-22 00:50):
I found the depth part was already mentioned.
Sorry. :-)

PS: 8 bits seems too little for the position reconstruction, though.

Jay (2010-03-22 00:23):
I am still not sure how it can work.
The idea suggests sampling two points: one with a linear sampler and one with a point sampler. I guess the linear sample should be taken at the middle of the pixel to get the average value, while the point sample should be at one of the sample positions of the MSAA surface.

On a 4xMSAA surface there are 4 sample points, and I wonder which one I should sample with the point sampler.

My best guess is that the idea assumes 2xMSAA, not 4xMSAA.

Also, the calculation clip( -abs(L-P) + epsilon ) doesn't detect edges but non-edges. It may need to be clip( abs(L-P) - epsilon ) instead.

Please also note that we cannot decide whether a pixel contains an edge from the normal values alone. Even if the normal values at every sample point are exactly the same, that doesn't mean it is a non-edge pixel; we also need to check the depth values.

Please let me know what I'm missing here.

DEADC0DE (2010-03-21 19:47):
And why can't you just use ddx/ddy on the color, instead of doing this mipmap trick?

Anonymous (2010-03-20 17:04):
For the epsilon bias optimization to work with the previous trick, the linear depth value should also be in the same texture (typically xyz = normal.xyz and w = linear depth), or we could fall into the 'almost the same normal but very different Z' case.
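The two clip variants Jay compares above behave like this (a Python sketch; HLSL clip(x) discards the pixel when x is negative, and L, P, and epsilon are made-up values):

```python
def survives_clip(x):
    # HLSL clip(x) discards the pixel when x < 0; here True means "kept".
    return x >= 0

L, P, eps = 0.80, 0.95, 0.05  # linear sample, point sample, threshold

# clip(-abs(L-P) + eps) keeps pixels whose samples AGREE (non-edges).
print(survives_clip(-abs(L - P) + eps))  # False: these samples differ
# clip(abs(L-P) - eps) keeps pixels whose samples DIFFER (edges).
print(survives_clip(abs(L - P) - eps))   # True
```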
(On PC the depth is almost always stored in the same texture anyway; I forgot to mention this in my previous post.)

I have another funny edge detection trick that I used a year ago for antialiasing on PS3. It was not for MSAA but for a kind of EDAA for a forward renderer that needed edge detection done on the final color buffer.

Bind a small (preferably swizzled, DXT1) volume texture with 0/255 in mipmap zero and 255/255 in the other mipmap levels, then use the rgb values from the backbuffer as 3D texture coordinates. (Both textures can be read with point filtering.)

    PS_OUTPUT PS_EdgeDetectVolumeTex(...)
    {
        half4 color1 = tex2D(backbuffer, uv.zw);
        half4 color2 = tex2D(backbuffer, uv.xy);

        half edge1 = tex3D(volumeTex, color1.rgb);
        half edge2 = tex3D(volumeTex, color2.rgb);
        return half4(color1.rgb, -edge1 - edge2);
    }

Enable alpha test with alpha > 0 (it saves one TEXKILL shader instruction), so that if edge1 or edge2 is not 0/255 the pixel is discarded.

The trick, for those who didn't get it, relies on how the hardware selects the mipmap level: here the level depends on how fast the rgb values are changing within a 2x2 pixel quad. It is done twice so that the result is accurate; otherwise it could miss edges between two pixels lying in two different 2x2 quads.
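What the mip selection effectively measures can be sketched on the CPU (Python; an illustration of the idea only, not the actual GPU path, and the threshold is made up):

```python
def quad_is_edge(quad, threshold=0.1):
    # quad: four rgb tuples from a 2x2 pixel block, components in [0, 1].
    # The hardware picks a lower mip when the texture coordinates (here: the
    # backbuffer colors) change quickly across the quad; we mimic that with
    # a max-delta test per channel.
    for ch in range(3):
        vals = [px[ch] for px in quad]
        if max(vals) - min(vals) > threshold:
            return True
    return False

flat = [(0.5, 0.5, 0.5)] * 4
edge = [(0.1, 0.1, 0.1)] * 2 + [(0.9, 0.9, 0.9)] * 2
print(quad_is_edge(flat), quad_is_edge(edge))  # False True
```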