I am thinking a lot about Crytek's Screen-Space Ambient Occlusion (SSAO) and the idea of extending this into a global illumination term.
When combined with a Light Pre-Pass renderer, the light buffer already holds all the N.L * Att values, which can be used as intensity; then there is the end result of the opaque rendering pass, and we have a normal map lying around. Bouncing the light along the normal and using the N.L * Att entry in the light buffer as intensity should do the trick. The values would be fetched in a way similar to SSAO.
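To make that fetch pattern concrete, here is a minimal CPU-side sketch of the gather, written in Python rather than shader code. The buffer layout, the sample kernel, and the tiny two-pixel test scene are all made up for illustration; this is a sketch of the idea, not anyone's actual implementation:

```python
import math

def ssgi_bounce(px, py, positions, normals, light_buffer, colors, kernel):
    """One-bounce indirect light for pixel (px, py), gathered SSAO-style
    from nearby screen pixels that act as secondary light sources."""
    p = positions[py][px]
    n = normals[py][px]
    bounce = [0.0, 0.0, 0.0]
    for dx, dy in kernel:
        sx, sy = px + dx, py + dy
        if not (0 <= sy < len(positions) and 0 <= sx < len(positions[0])):
            continue  # sample fell off-screen: the classic screen-space limitation
        q = positions[sy][sx]
        d = [q[i] - p[i] for i in range(3)]
        dist2 = sum(c * c for c in d) + 1e-6
        inv_len = 1.0 / math.sqrt(dist2)
        direction = [c * inv_len for c in d]
        # Receiver form factor: how strongly the bounce arrives along the normal.
        cos_r = max(0.0, sum(n[i] * direction[i] for i in range(3)))
        # Sender intensity comes straight from the light buffer (N.L * Att).
        intensity = light_buffer[sy][sx] * cos_r / dist2
        for i in range(3):
            bounce[i] += colors[sy][sx][i] * intensity
    return bounce

# Tiny 1x2 "screen": a grey receiver next to a directly lit blue wall.
positions = [[[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]]
normals   = [[[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]]]
light     = [[0.0, 1.0]]                         # N.L * Att per pixel
colors    = [[[0.5, 0.5, 0.5], [0.0, 0.0, 1.0]]]
blue_bounce = ssgi_bounce(0, 0, positions, normals, light, colors, [(1, 0), (-1, 0)])
```

In a real shader the positions would be reconstructed from the depth buffer and the kernel would be a randomized SSAO-style pattern; the point here is only that the sender's contribution is its albedo times its light-buffer entry, weighted by the receiver's cosine and distance.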
Take a look at this: http://www2.imm.dtu.dk/visiondag/VD08/posters/pdf/Visionday-DynamicIndirectLighting.pdf
I've never really been convinced by the idea of gathering indirect light from the screen buffer. In figure 4 of that poster, if the camera moves to the right, the bounced blue light will disappear from the dragon as the blue wall is no longer rendered. Depending on how far you're casting your rays out, I'd imagine that change could be quite abrupt.
That said, I've never tried it, and similar issues exist with SSAO, but that doesn't stop it from being an overall win in adding interest to a scene. :)
Hi :-) ... bakura, is that Bart Sekura?
I hope you are great. This really looks like what I had in mind. I have a tendency to come up with ideas one or two years too late :-)
Hi stuart yarham,
yes, you are right. What I like about those approaches is that you do not have to stream in anything. If you have large worlds, this is always a win ... but on the other hand, it is so prohibitively expensive that you really have to think twice before using it.
Bart Sekura? I really don't know who that is :D.
I think this paper is from 2008, so you are less than a year late :p.
As stuart yarham said, this technique seems to have a lot of limitations; I had already thought of that problem :/.
Do you know the pixel-correct shadow maps technique?
It generates the shadow a piece at a time, frame by frame, using a screen buffer for accumulation.
In a similar way, I think screen-space radiosity could be accumulated over several frames.
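If I understand the idea, the frame-by-frame accumulation could be sketched as a simple exponential moving average per pixel. The function name and the blend factor below are my own assumptions, not taken from the pixel-correct shadow maps paper:

```python
def accumulate(history, current, alpha=0.1):
    """Blend this frame's noisy indirect term into the running history.
    alpha is the fraction of new information accepted per frame, so the
    old estimate decays as (1 - alpha)**n over n frames."""
    return [(1.0 - alpha) * h + alpha * c for h, c in zip(history, current)]

# Repeatedly feeding the same indirect term converges the history toward it,
# averaging out per-frame sampling noise along the way.
history = [0.0, 0.0, 0.0]
for _ in range(50):
    history = accumulate(history, [0.0, 0.0, 1.0])
```

In practice the history would also have to be reprojected or rejected whenever the camera or the scene moves, which is exactly the hard part of these amortized screen-space schemes.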
I am actually the author of:
It's part of my Master's thesis, which I'm currently finishing up at DTU (Technical University of Denmark).
Since my poster was presented at the Vision Day at DTU, I have further improved my method and corrected some visual errors. When I am finished with my thesis (in a month or so) I can publish it if you're interested :)
Wolfgang Engel: If you would like to further discuss this topic, feel free to email me.
Hey Mikkel, I sent you an e-mail :-)
And what about the problems stuart yarham was talking about in the first post?
That is the challenge ... we just need to solve those problems :-)
I'm having a hard time understanding why so many people dislike screen-space methods. Granted, they are view-dependent, which mostly leads to two kinds of problems: noise, and a lack of data to solve certain problems.
In my method, the noise is easily removed by blurring the indirect term while taking the depth gradients into consideration.
I have addressed this in my latest improvements. It is a valid solution since indirect lighting is a low-frequency function, and by blurring this signal, peaks are effectively smoothed out.
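The depth-aware blur described here can be sketched in 1D; the Gaussian depth weight and the sigma value below are my own guesses at the details, not Mikkel's actual filter:

```python
import math

def depth_aware_blur(indirect, depth, radius=2, depth_sigma=0.1):
    """Blur a 1D indirect-light signal, down-weighting samples whose depth
    differs from the center pixel so light does not bleed across silhouettes."""
    out = []
    for i in range(len(indirect)):
        total, weight_sum = 0.0, 0.0
        for j in range(max(0, i - radius), min(len(indirect), i + radius + 1)):
            dz = depth[j] - depth[i]
            w = math.exp(-(dz * dz) / (2.0 * depth_sigma * depth_sigma))
            total += w * indirect[j]
            weight_sum += w
        out.append(total / weight_sum)
    return out

# Noisy indirect term on a flat surface (depth 1) next to a far surface (depth 5):
# the noise on the flat part averages out, while the depth discontinuity is kept.
smoothed = depth_aware_blur([1.0, 0.0, 1.0, 0.0, 10.0, 10.0],
                            [1.0, 1.0, 1.0, 1.0, 5.0, 5.0])
```

A real implementation would blur a 2D buffer, usually as a separable pass, but the weighting idea is the same.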
The lack of knowledge to solve a given problem is, in my case, a matter of not being able to gather direct illumination from behind visible objects. Again, this is rarely noticeable because of its low-frequency nature. If, however, you really wanted to squeeze more realism out of this model, you could (and I will address this in my thesis as well) sample a back-face view of your scene. This of course only gives you one more layer, but will in most cases result in images very near ground truth.
To answer Stuart Yarham's question: yes, changing the point of view results in a different global illumination solution, but in most cases this is not noticeable since it's a low-frequency function. The worst case is of course when the view rays are parallel to, e.g., the blue wall, but human vision tends to only take notice of color bleeding when both the caster and the receiver are visible ;)
When working with dynamic scenes, screen-space methods can be great tools for approximating many of the otherwise offline methods.
After reading these comments I really want to read the paper, but following that link I just get a 'Not Found' message. Where can I find that paper?
I have asked the administrators at DTU to remove my poster, since my supervisor thinks that I should publish a paper on the subject and as a result of that, I shouldn't have anything related lying around online :)
Mail me your email-address though ;)
I haven't read your paper, so my comment might not make sense. Could you use dual paraboloid textures of the scene? Maybe you have two textures that represent the entire scene instead of a standard frame buffer. That might allow you to take care of all those edge cases.
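For reference, the standard dual-paraboloid mapping projects a direction into one of two textures. A minimal sketch (the face convention and [-1, 1] range here are an arbitrary choice; real renderers remap to [0, 1] texture coordinates):

```python
def paraboloid_uv(d):
    """Map a unit direction to dual-paraboloid coordinates.
    Returns (face, u, v): face 0 covers z >= 0, face 1 covers z < 0,
    with (u, v) reaching the unit circle at the hemisphere's edge."""
    x, y, z = d
    if z >= 0.0:
        return 0, x / (1.0 + z), y / (1.0 + z)
    return 1, x / (1.0 - z), y / (1.0 - z)

# Looking straight down +z lands at the center of the front texture;
# a direction along +x lands on its rim.
center = paraboloid_uv((0.0, 0.0, 1.0))
rim = paraboloid_uv((1.0, 0.0, 0.0))
```

Whether rendering the whole scene into two paraboloid maps is cheap enough to fix the missing-data cases is another question; it is essentially a second (and third) scene pass.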
It seems that the Crysis guys already experimented with that idea, too (but scrapped it). Source: Crysis demo shader files (search for SSIL).
After reading this post I've posted a couple of shots of my (hacky) "SSGI" implementation.
Nothing special, but it's a start...
Hey Dr. Kappa,
this looks already great :-)
Thanks for the link from your blog.
I talked to Carsten Dachsbacher about this last week. He mentioned that reflective shadow maps are a kind of light-space ambient occlusion. I think light space is probably easier to use here.
This sounds like a great idea; has anyone got any further with it?
I have now almost finished my Master's thesis and a visual overview of my method can be seen here:
Mikkel, is there a way to see a screenshot of your SSGI implementation?
I've modified mine, there's a post on my blog. I'd love to see how your implementation looks.