The Killzone 2 team came up with an interesting way to use MSAA on the PS3. You can find it on page 39 of the following slides:
http://www.dimension3.sk/mambo/Articles/Deferred-Rendering-In-Killzone/View-category.php
What they do is read both samples of the multisampled render target, run the lighting calculation for each of them, average the results, and write the average into the multisampled accumulation buffer (I assume it has to be multisampled because the depth buffer is multisampled). That somewhat decreases the effectiveness of MSAA, because the pixel averages all samples regardless of whether they actually pass the depth-stencil test. The multisampled accumulation buffer may therefore contain different values per sample when it was supposed to contain a single value representing the average of all samples. Alternatively, they might store the value in only one of the samples and resolve afterwards, which would mean the pixel shader runs only once.
This is also called "on-the-fly resolves".
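A minimal sketch of such an on-the-fly resolve, written as CPU-side Python rather than shader code (the `light_sample` function, the two-sample G-buffer layout, and the `n_dot_l` attribute are my own illustrative assumptions, not taken from the slides):

```python
# On-the-fly resolve for 2xMSAA deferred lighting, sketched on the CPU.
# Each G-buffer pixel holds two samples of geometry data.

def light_sample(sample, light):
    # Placeholder for the real lighting equation: scale the light color
    # by a precomputed N.L term stored per sample.
    return tuple(c * sample["n_dot_l"] for c in light["color"])

def shade_pixel_on_the_fly_resolve(gbuffer_samples, light):
    # Read BOTH samples and light each one...
    lit = [light_sample(s, light) for s in gbuffer_samples]
    # ...then average and write a single value per pixel: the resolve
    # happens inside the lighting shader instead of in a later pass.
    return tuple(sum(ch) / len(lit) for ch in zip(*lit))

samples = [{"n_dot_l": 1.0}, {"n_dot_l": 0.5}]  # e.g. a pixel on an edge
print(shade_pixel_on_the_fly_resolve(samples, {"color": (1.0, 1.0, 1.0)}))
# -> (0.75, 0.75, 0.75)
```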
It is better to write a dedicated value into each sample by using the sample mask, but then with 2xMSAA you run your pixel shader 2x ... DirectX10.1+ has the ability to run the pixel shader per sample. That doesn't mean it fully runs per sample: the MSAA unit seems to replicate the color value accordingly. That's faster, but not possible on the PS3. I can't remember whether the XBOX 360 has the ability to run the pixel shader per sample, but this is possible.
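The sample-mask alternative might be sketched like this (same toy lighting as before; all names are mine, and the real per-sample path goes through hardware sample masks, not a Python loop): the shader runs once per sample, so the cost is 2x for 2xMSAA, but every sample gets its own correct value and a conventional resolve happens afterwards.

```python
def light_sample(sample, light):
    # Stand-in for the real per-sample lighting equation.
    return tuple(c * sample["n_dot_l"] for c in light["color"])

def shade_pixel_per_sample(gbuffer_samples, light):
    # One shader invocation per sample: 2x the cost at 2xMSAA, but each
    # sample of the accumulation buffer holds its own dedicated value.
    return [light_sample(s, light) for s in gbuffer_samples]

def resolve(sample_colors):
    # Conventional MSAA resolve as a separate, final step.
    return tuple(sum(ch) / len(sample_colors) for ch in zip(*sample_colors))

samples = [{"n_dot_l": 1.0}, {"n_dot_l": 0.5}]
per_sample = shade_pixel_per_sample(samples, {"color": (1.0, 1.0, 1.0)})
print(resolve(per_sample))  # -> (0.75, 0.75, 0.75)
```

For a single light the end result matches the on-the-fly resolve; the difference shows up once multiple light passes with depth-stencil masking accumulate into the buffer, because here each sample is tested and written individually.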
12 comments:
I'm really confused by your comments. Why should the accumulation buffer have to be MSAA? It's not tied to the depth buffer (the G-buffers are). Even so, how does this make MSAA "less effective"? They're doing lighting per-sample, and then down-sampling. The fact that some samples pass or fail the depth test is accounted for in the fact that each sample will have different values in the G-buffers. That's sort of the point, right?
Hey Brian,
The depth buffer is multi-sampled, and the accumulation buffer is part of the G-buffer; at least it is shown as part of the G-buffer in one of the slides.
<<<
The fact that some samples pass or fail the depth test is accounted for in the fact that each sample will have different values in the G-buffers. That's sort of the point, right?
<<<
You want the lighting also to be MSAA'ed, not just the color values.
I'm still missing some distinction, then. Imagine their lighting pass, being applied to two samples of a given pixel that straddle a polygon edge. They claim that they do the lighting per-sample, so each of those samples will reconstruct a different world-space position (based on the different values in the depth buffer). They will therefore receive correct lighting per-sample. The final results will then be blended together as part of the output. Isn't this (other than their admitted gamma issues) the same as writing out each of those sample calculations separately and then combining them during a separate resolve pass later?
In an ideal world you want to depth test per sample again before you write into the accumulation buffer.
... but by definition, every sample in your G-buffers MUST pass the depth test. It's the closest opaque surface in that location. The whole idea of the MSAA is to do your lighting at per-sample (sub-pixel) resolution, then combine the results to smooth out the discontinuities along triangle edges. What depth value would you be testing against where samples would be discarded? If you somehow managed to test against the closest (resolve depth with min?), then you'd effectively be discarding all of your MSAA work by ignoring all but the closest polygon within a given pixel.
Doing lighting on a per-sample level happens in the lighting stage, not in the geometry stage. The advantage of splitting into a geometry stage and a lighting stage has to be paid for by keeping the geometry data per sample and then doing the lighting per sample, to reach the same quality as a forward renderer.
Yes, and that's exactly what they're describing on page 39 of the slides. They render out their G-buffers, which are MSAA. Then they do their lighting pass, which performs the lighting and accumulates the results in the (presumably non-MSAA) frame buffer. I'm assuming it's non-MSAA, because why else would they point out that they 'Run light shader at pixel resolution'?
During this lighting pass, they're doing the lighting TWICE per run of the pixel shader (once for each sample). Thus, their lighting is computed at per-sample resolution. Then they average those results, and accumulate them. It's only written out at per-pixel resolution, but that's fine. There's no more (meaningful) work to be done, so they might as well do the downsample during the lighting pass.
Ah, I just noticed one of the confusion sources. They do have a lighting accumulation buffer listed as part of their G-buffer configuration. I *think* that's not actually the buffer used for final accumulation (which they refer to as the frame buffer elsewhere). So their MSAA double-sample lighting thing doesn't apply to values being written into that buffer. That's only for their IBL and other effects?
Brian: I understand what they do. It is a way to improve the resulting image, but it doesn't give you the same MSAA quality as a forward rendered MSAA'ed image.
Averaging the samples is not correct but it is the best you can do on the platform.
We have a vaguely similar technique for deferred lighting with MSAA on the PS3, but we're using the SPU to handle it - we read both samples of each pixel from the MSAA'd normal buffer and calculate light per pixel when they're the same, and per sample when they differ.
see:
http://research.scee.net/files/presentations/gdc2009/DeferredLightingandPostProcessingonPS3.ppt
for more information. (apologies to those who have seen that before)
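Matt's branch-on-sample-equality approach might be sketched like this (assuming a simple equality test on the stored normals as the edge detector, and a toy Lambert term; the real SPU code surely differs):

```python
def light_pixel(normal, light_dir):
    # Toy Lambert term standing in for the full lighting equation.
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))

def shade_pixel_branching(sample_normals, light_dir):
    # When both samples of the MSAA'd normal buffer agree, light once
    # per pixel; only where they differ (an edge) pay for per-sample work.
    if sample_normals[0] == sample_normals[1]:
        return light_pixel(sample_normals[0], light_dir)
    lit = [light_pixel(n, light_dir) for n in sample_normals]
    return sum(lit) / len(lit)

up = (0.0, 1.0, 0.0)
side = (1.0, 0.0, 0.0)
print(shade_pixel_branching([up, up], up))    # interior pixel, lit once -> 1.0
print(shade_pixel_branching([up, side], up))  # edge pixel, per sample -> 0.5
```

Since most pixels are interior, the expensive per-sample path only runs on the small fraction of pixels that straddle an edge.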
You might want to clean up these comments into a new blog post, Wolfgang, as right now it is very confusing.
Also - I do something different but similar to Matt's approach (I've had Matt's approach on my whiteboard for a month or so now)... I'll explain next time we talk.
Andy: I will follow up with a more extensive post at some point ...
It becomes obvious if you try to visualize a 4xMSAA'ed pixel in the lighting stage. Let's say three samples in this pixel use light 0, which is green, and one sample uses light 1, which is red. Averaging the shading won't end up correct here.
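One way to put numbers on that 4xMSAA pixel (pure illustration, linear color space assumed): per-sample lighting followed by a resolve keeps the minority sample's light, while a pass that can only shade at pixel resolution has to pick one sample's G-buffer data and loses it entirely.

```python
# Three samples lit green by light 0, one lit red by light 1.
sample_colors = [(0.0, 1.0, 0.0)] * 3 + [(1.0, 0.0, 0.0)]

# Per-sample lighting + resolve: 3/4 green, 1/4 red survives.
resolved = tuple(sum(ch) / 4 for ch in zip(*sample_colors))
print(resolved)  # -> (0.25, 0.75, 0.0)

# A lighting pass run at pixel resolution that only reads sample 0's
# attributes shades ALL four samples green: the red contribution is lost.
per_pixel = sample_colors[0]
print(per_pixel)  # -> (0.0, 1.0, 0.0)
```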