There is a new lighting approach that extends the Light Pre-Pass idea. It is called Inferred Lighting and was presented by Scott Kircher and Alan Lawrence from Volition. Here is the link:
http://graphics.cs.uiuc.edu/~kircher/publications.html
They assume a Light Pre-Pass setup as covered here on this blog, with three passes: the geometry pass, in which the G-Buffer is filled; the lighting pass, in which light properties are rendered into a light buffer; and a material pass, in which the whole scene is rendered again, this time reconstructing the different materials.
Their approach adds several new techniques to the toolset used to do deferred lighting / Light Pre-Pass.
1. They use a much smaller G-Buffer and light buffer with a size of 800x540 on the Xbox 360. This way their memory bandwidth usage and pixel shading cost should be greatly reduced.
2. To upscale the final light buffer, they use Discontinuity Sensitive Filtering (DSF). During the geometry pass, one 16-bit channel of the DSF buffer is filled with the linear depth of the pixel, the other 16-bit channel is filled with an ID value that semi-uniquely identifies continuous regions. The upper 8 bits are an object ID, assigned per object (renderable instance) in the scene. Since 8 bits allow only 256 unique object IDs, scenes with more objects than that will have some objects sharing the same ID.
The lower 8 bits of the channel contain a normal-group ID. This ID is pre-computed and assigned to each face of the mesh. Anywhere the mesh has continuous normals, the ID is also continuous. A normal is continuous across an edge if and only if the two triangles share the same normal at both vertices of the edge.
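A minimal sketch of how the geometry pass might write these two DSF channels (the function name and the exact normalization are my assumptions, not code from the paper):

```hlsl
// Geometry pass: fill the two 16-bit DSF channels.
// objectID (0..255) and normalGroupID (0..255) are hypothetical per-instance /
// per-face constants; linearDepth is assumed to be view-space depth divided by the far plane.
float2 WriteDSF(float linearDepth, uint objectID, uint normalGroupID)
{
    // Upper 8 bits: object ID, lower 8 bits: normal-group ID,
    // packed into one normalized 16-bit value.
    float packedID = (objectID * 256 + normalGroupID) / 65535.0f;
    return float2(linearDepth, packedID);
}
```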
By comparing normal-group IDs, the discontinuity sensitive filter can detect normal discontinuities without actually having to reconstruct and compare normals. Both the object ID and the normal-group ID must exactly match the material pass polygon being rendered before the light buffer sample can be used (depth must also match within an adjustable threshold).
During the material pass, the pixel shader computes the locations of the four light buffer texels that would normally be accessed if regular bilinear filtering were used. These four locations are point sampled from the DSF buffer. The depth and ID values retrieved from the DSF buffer are compared against the depth and ID of the object being rendered. The results of this comparison are used to bias the usual bilinear filtering weights so as to discard samples that do not belong to the surface currently being rendered. These biased weights are then used in custom bilinear filtering of the light buffer. Since the filter only uses the light buffer samples that belong to the object being rendered, the resulting lighting gives the illusion of being at full resolution. The same method works even when the framebuffer is multisampled (hardware MSAA); however, sub-pixel artifacts can occur because the pixel shader is only run once per pixel rather than once per sample.
The authors report that such sub-pixel artifacts are typically not noticeable.
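In shader terms, the custom filtering might look roughly like the sketch below. The buffer names, LightBufferSize and DepthThreshold are my assumptions; the paper biases the weights, which is approximated here with a hard accept/reject per sample:

```hlsl
// Material pass: discontinuity sensitive up-sampling of the light buffer.
Texture2D    LightBuffer;
Texture2D    DSFBuffer;
SamplerState PointClamp;
float2       LightBufferSize;   // e.g. (800, 540)
float        DepthThreshold;    // adjustable depth tolerance

float4 SampleLightDSF(float2 uv, float pixelDepth, float pixelID)
{
    // The four texels regular bilinear filtering would touch.
    float2 texel = uv * LightBufferSize - 0.5f;
    float2 base  = floor(texel);
    float2 f     = texel - base;

    float4 result    = 0;
    float  weightSum = 0;

    [unroll]
    for (int i = 0; i < 4; ++i)
    {
        float2 corner   = float2(i & 1, i >> 1);
        float2 sampleUV = (base + corner + 0.5f) / LightBufferSize;

        // Point sample depth + ID from the DSF buffer.
        float2 dsf = DSFBuffer.SampleLevel(PointClamp, sampleUV, 0).rg;

        // Usual bilinear weight for this corner ...
        float w = lerp(1 - f.x, f.x, corner.x) * lerp(1 - f.y, f.y, corner.y);

        // ... zeroed if the sample belongs to another surface
        // (exact ID match, depth match within the threshold).
        w *= (abs(dsf.r - pixelDepth) < DepthThreshold && dsf.g == pixelID) ? 1.0f : 0.0f;

        result    += w * LightBuffer.SampleLevel(PointClamp, sampleUV, 0);
        weightSum += w;
    }
    return result / max(weightSum, 1e-5f);
}
```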
3. The authors of this paper also implemented a technique that allows alpha polygons to be rendered with Light Pre-Pass / deferred lighting. It is based on stippling and on the DSF filtering.
During the geometry pass the alpha polygons are rendered using a stipple pattern, so that their G-Buffer samples are interleaved with opaque polygon samples.
In the material pass the DSF for opaque polygons will automatically reject stippled alpha pixels, and alpha polygons are handled by finding the four closest light buffer samples in the same stipple pattern, again using DSF to make sure the samples were not overwritten by some other geometry.
Since the stipple pattern is a 2x2 regular pattern, the effect is that the alpha polygon gets lit at half the resolution of opaque objects. Opaque objects covered by one layer of alpha have a slightly reduced lighting resolution (one out of every four samples cannot be used).
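Roughly, the geometry-pass side of the stippling could look like the following sketch (the stipple index assignment and the helper function are my assumptions, not the paper's code):

```hlsl
// Geometry pass: write alpha geometry only into one cell of the 2x2 stipple pattern.
// stippleIndex (0..3) would be assigned per alpha object/layer.
void StippleClip(float4 screenPos, uint stippleIndex)
{
    uint2 cell      = uint2(screenPos.xy) & 1;   // position within the 2x2 pattern
    uint  cellIndex = cell.y * 2 + cell.x;
    clip(cellIndex == stippleIndex ? 1 : -1);    // keep only this object's stipple cell
}
```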
I actually just finished watching the video/reading the paper and this excites me greatly.
I have been messing around with mixed resolution rendering, and the edge problem is a large one. NVIDIA proposed stenciling edges and re-drawing only to the stenciled areas, but that has several problems. Firstly, if there is lighting going on, there will be a visible difference between the low res and high res. To combat this I have been experimenting with drawing light volumes to a single-pass dual paraboloid. This provides a low-res data source for both the high res and low res renders to sample from to get lighting data. This works really well for things like small particle systems, but not so much for larger ones. Another problem with re-drawing edges is having an additional draw call over the mixed-res geometry.
I do not think that normals need 16 bits of storage per channel, but I have not tried their storage format. I am currently really hot for the Lambert Azimuthal Equal-Area Projection. The encode/decode is cheap, and it appears, from looking at the Earth projected using this method, that it will be possible to blend normal data in this representation as well as in XYZ for things like decals, due to the properties of the projection.
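For reference, a minimal sketch of that encode/decode, assuming a normalized view-space normal (my own summary of the projection):

```hlsl
// Lambert azimuthal equal-area projection of a unit normal into two channels.
float2 EncodeNormal(float3 n)
{
    float f = sqrt(8.0f * n.z + 8.0f);
    return n.xy / f + 0.5f;              // two values in [0..1]
}

float3 DecodeNormal(float2 enc)
{
    float2 fenc = enc * 4.0f - 2.0f;
    float  f    = dot(fenc, fenc);
    float  g    = sqrt(1.0f - f / 4.0f);
    return float3(fenc * g, 1.0f - f / 2.0f);
}
```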
I am really impressed by their discussion of platform specificity: the Int16 EDRAM format issue and the half-pixel offset. All in all, an awesome paper/video that I need to re-read/watch :)
Hey Wolf, any details of your talk available?
I will make my SIGGRAPH talk available soon ...
@Pat: sorry to hijack the topic, but I've just added scaled Stereographic Projection to my encoding normals article: http://aras-p.info/texts/CompactNormalStorage.html
It is a good contender to the Lambert Azimuthal method (quality slightly higher, shader ops slightly cheaper). Just so you know :)
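The idea is roughly the sketch below; the scale factor is what I recall from the article, so treat it as an assumption (any value used consistently in encode and decode will round-trip the normal):

```hlsl
// Scaled stereographic projection of a unit normal into two channels.
static const float SCALE = 1.7777f;   // assumed; see the linked article for the exact variant

float2 EncodeNormalStereo(float3 n)
{
    float2 enc = n.xy / (n.z + 1.0f);   // stereographic projection
    return enc / SCALE * 0.5f + 0.5f;   // rescale into [0..1]
}

float3 DecodeNormalStereo(float2 enc)
{
    float3 nn = float3((enc * 2.0f - 1.0f) * SCALE, 1.0f);
    float  g  = 2.0f / dot(nn, nn);
    return float3(g * nn.xy, g - 1.0f);
}
```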
Did they mention issues with detailed normal maps? For example, a skin shader would have multiple normal map layers in order to simulate the skin's grain. With a sub-sampled lighting buffer such grain would possibly disappear, and I was pretty curious to know whether they had problems with that.
Benjamin
@Aras: I am still undecided. I think I like the properties of Lambert's projection better, but I need to test further.
This technique seems very neat, and I especially like the idea for blending alpha, as it seems very usable for water and glass overall, which are very often troublesome. I'm not sure it would work as well for particles (fire/smoke/clouds/etc.) though.. or maybe it will still work reasonably well since there are no z-ordering issues?
Here's another thought.. please correct me if I'm wrong! I assume you still have to turn off depth writes when rendering alpha in the geometry pass, right?
The way I understand stippling is that you can only cover four levels of transparency, so particles will probably not work very well, but it works on all the things where you only need one level of transparency, like car windows, bus stops with glass and obviously the cockpit glass of an airplane, etc.
I think you have to turn off rendering geometry in the material pass by not rendering alpha geometry into the depth buffer ...
... it is actually three levels of transparency :-)
Where do we put the normal-group ID? If we save it to vertex data, then the vertex on a discontinuous edge has to be split for two IDs.
Yes. But that's what you want. Vertices on discontinuous edges need to be split anyway, because (by definition) they have different normals.
Brian is right. I forgot that. Another question: what's the format of the G-Buffer?
- RGBA16F. This format suits normal.xy and linear depth well, but the DSF ID is an unsigned short. Although DX10 has asuint() and asfloat(), for DX9 and OpenGL could it be saved as (objectID + normalGroupID*0.001)?
- RGBA16I. With this format, normal.xy and linear depth have to be converted to unsigned short. Besides the precision problem, the conversion at each pixel when writing and reading the G-Buffer may be a big cost.
OK, I just use two textures of an FBO in the G-Buffer pass:
- RG16F for normal.xy.
- RG16I for DSF.
The result is great:
Lighting without normal group id:
http://i724.photobucket.com/albums/ww246/LangFox/without_normal_group.jpg
Lighting with normal group id:
http://i724.photobucket.com/albums/ww246/LangFox/with_normal_group.jpg
Seems like an interesting algorithm, although to me it seems unacceptable to downscale the screen space normal map resolution. It would explain why the demo video looked a bit flat.
The DSF filter is done well and I like the way they treat lit and shadowed transparency in the same way.
This seems like a refreshing solution since I have massive geometry complexity which does not sit favourably with lighting prepass.
However, I don't like being limited to 256 on-screen objects, and although IDs can be shared, there is the possibility of blurring artifacts when these objects overlap.
It could be said that I implemented their paper... but with a personal touch :P. I'm using forward shading atm, but writing it to a half-resolution light buffer (along with shadows and SSAO). Then I blur this light buffer with a 2x2 filter. Finally I render the full-resolution normals and depth, and perform a bilateral blur on the light buffer, gathering light from it only when a weight calculated from the normal and depth difference does not exceed a threshold. No material IDs like they use, and no problems with detailed normal maps since the normal map is full resolution. However, the lighting calculations are performed at lower resolution, and since you're blurring it, adding shadow maps and SSAO to the mix yields artifact-free soft shadows.
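A rough sketch of that kind of weight, with hypothetical parameter names and falloffs:

```hlsl
// Bilateral upsample weight: reject half-resolution light samples whose depth and
// normal differ too much from the full-resolution pixel being shaded.
// DepthScale, NormalPower and WeightThreshold are hypothetical tuning values.
float DepthScale;
float NormalPower;
float WeightThreshold;

float BilateralWeight(float hiDepth, float3 hiNormal, float loDepth, float3 loNormal)
{
    float depthTerm  = exp(-abs(hiDepth - loDepth) * DepthScale);
    float normalTerm = pow(saturate(dot(hiNormal, loNormal)), NormalPower);
    float w          = depthTerm * normalTerm;
    return (w >= WeightThreshold) ? w : 0.0f;   // discard samples across edges
}
```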
I like the way they treat alpha polygons, though. The only problem is the 3 levels of transparency :(.
Re: what's the format of the GBuffer?
I'm using 2xRGBA8
The normal is encoded into 16 bits, depth into 24 bits, and then 24 bits are left over for DSF IDs and specular power.
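One possible layout, just to illustrate (which channels hold what is my guess):

```hlsl
// Hypothetical 2xRGBA8 layout:
//   RT0.rg = encoded normal (16 bits), RT0.ba = upper 16 bits of depth
//   RT1.r  = lowest 8 bits of depth,   RT1.g = object ID
//   RT1.b  = normal-group ID,          RT1.a = specular power
float3 PackDepth24(float depth)          // depth assumed in [0, 1)
{
    float3 enc = frac(depth * float3(1.0f, 256.0f, 65536.0f));
    enc.xy -= enc.yz / 256.0f;           // remove the part carried by the next byte
    return enc;
}

float UnpackDepth24(float3 enc)
{
    return dot(enc, float3(1.0f, 1.0f / 256.0f, 1.0f / 65536.0f));
}
```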