http://www.microsoft.com/downloads/details.aspx?FamilyId=32906B12-2021-4502-9D7E-AAD82C00D1AD&displaylang=en
I thought I'd comment on those slides because I do not get the main idea. The slides mention a combinatorial explosion for shaders. Slide 19 shows three arrows going in three directions: one is called Number of Lights, another Environmental Effects, and the third Number of Materials.
Regarding the first one: even someone who has never worked on a game knows the words Deferred Lighting. If you want many lights, you want to do the lighting in a way that re-uses the same shader for every light of a given type. Assuming we have directional, point and spot lights, this brings me to three shaders (I currently use three but might increase this to six).
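For example, a deferred point-light pass reads the surface data back from a G-buffer, so a single pixel shader covers every point light in the scene regardless of how many there are. A minimal sketch — the G-buffer layout, resource names and the position-reconstruction helper are illustrative assumptions, not anything from the slides:

```hlsl
// Deferred point-light pass: one shader handles all point lights.
// G-buffer layout and names below are assumptions for illustration.
Texture2D g_GBufferAlbedo : register(t0);
Texture2D g_GBufferNormal : register(t1);
Texture2D g_GBufferDepth  : register(t2);

cbuffer cbPointLight : register(b0)
{
    float3 g_LightPosVS;   // light position in view space
    float  g_LightRadius;
    float3 g_LightColor;
};

// Assumed helper: reconstructs view-space position from the depth buffer.
float3 ReconstructViewPos(int3 texel, Texture2D depthTex);

float4 PSPointLight(float4 pos : SV_Position) : SV_Target
{
    int3   texel  = int3(pos.xy, 0);
    float3 albedo = g_GBufferAlbedo.Load(texel).rgb;
    float3 normal = normalize(g_GBufferNormal.Load(texel).xyz * 2.0f - 1.0f);
    float3 posVS  = ReconstructViewPos(texel, g_GBufferDepth);

    float3 toLight = g_LightPosVS - posVS;
    float  atten   = saturate(1.0f - length(toLight) / g_LightRadius);
    float  ndotl   = saturate(dot(normal, normalize(toLight)));
    return float4(albedo * g_LightColor * ndotl * atten, 1.0f);
}
```

The directional and spot variants differ only in the attenuation and direction math, which is why the light count stops multiplying the shader count.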
One arrow talks about Environmental Effects. Most environmental effects nowadays are part of PostFX or a dedicated sky dome system. That adds two more shaders.
The last arrow says Number of Materials. Usually we have up to 20 shaders for the different materials.
This brings me to, let's say, 30 to 40 shaders in a game. I can't consider this a combinatorial explosion so far.
On slide 27 it is mentioned that the major driving point for introducing OOP is dynamic shader linkage, and the need for dynamic shader linkage in turn seems to come from the combinatorial explosion of shaders.
So in essence the language design of HLSL is driven by the assumption that we have too many shaders and can't cope with the sheer quantity. To fix this we need dynamic shader linkage, and to make this happen we need OOP in HLSL.
It is hard for me to follow this logic. It looks to me like we are taking a huge step backward here: not focusing on the real needs, and adding code bloat.
Dynamic shader linkers were proven useless in game development a long time ago; the previous attempts in this area were buried with the DirectX 9 SDKs. The reason is that they do not allow you to hand-optimize code, which is a very important thing to do to keep your title competitive. As soon as you change one of the shader fragments, this has an impact on the performance of other shaders. Depending on whether or not you hit a performance sweet spot, you can get very different performance out of graphics cards.
Because the performance of your code base becomes less predictable, you do not want to use a dynamic shader linker if you want to create competitive games in the AAA segment.
Game developers need more control over the performance of the underlying hardware. We are already forced to use NVAPI and other native APIs to ship games on the PC platform with an acceptable feature set and performance (especially on SLI configs) because DirectX does not expose the functionality. For the DirectX 9 platform we are looking into CUDA and CAL support for PostFX.
This probably does not have much impact on the HLSL syntax, but in general I would prefer more ways to squeeze performance out of graphics cards over any OOP extension that does not sound like it increases performance. At the end of the day, the language is a tool to squeeze as much performance as possible out of the hardware. What else do you want to do with it?
3 comments:
I think what they are trying to do is develop a language that can be used for more generalized GPGPU programming, and not necessarily one focussed strictly on game-development pixel/geometry/vertex shaders. They complicate the pipeline even more by adding two more shader stages to handle tessellation, but oh well! Personally, I'm more in favour of CUDA (for NVIDIA only) and OpenCL than I am of this Compute Shader model.
Ha! I love it.
No offense but just because you don't find the use for some feature of the API doesn't mean someone else won't.
I've seen many cases where an uber-shader is compiled x number of times with different preprocessor options, and x was well into the hundreds.
In such cases, concurrent compilation and dynamic linkage drastically cut down the time and effort needed to compile the shaders.
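For scale: with independent preprocessor toggles the permutation count multiplies, so ten boolean options already mean 2^10 = 1024 compiled variants. A sketch of how those options combine in an uber-shader — the feature macros below are made up for illustration, not taken from the post:

```hlsl
// Uber-shader fragment: every independent #define doubles the number of
// permutations that must be compiled (e.g. ten toggles -> 2^10 = 1024).
// All feature macros here are illustrative assumptions.
struct PSInput
{
    float4 pos    : SV_Position;
    float3 normal : NORMAL;
    float2 uv     : TEXCOORD0;
};

Texture2D    g_Albedo  : register(t0);
SamplerState g_Sampler : register(s0);

float4 PSMain(PSInput input) : SV_Target
{
    float4 color = g_Albedo.Sample(g_Sampler, input.uv);

#ifdef USE_NORMAL_MAP
    // sample and decode a tangent-space normal map here
#endif

#ifdef USE_SPECULAR
    // add a specular term here
#endif

#ifdef USE_FOG
    // blend toward the fog color here
#endif

    return color;
}
```

Each variant is compiled by passing a different set of defines to the compiler, which is where the hundreds of compilations come from.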
There is also a very important concept being missed here: maintainability. Shaders nowadays are just very big C programs that can be a nightmare to debug and maintain (even with the help of PIX & co). With this OOP approach, we would be able to break shaders down into multiple abstract chunks, each dealing with a specific piece of functionality (lighting, environment, etc.).
The main shader could be just a shell to host those interfaces and each interface could be independently implemented and tested.
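In Shader Model 5 terms, that shell looks roughly like the sketch below: the pixel shader calls through an interface, concrete classes implement it, and the application selects the implementation at bind time (via class-linkage class instances) rather than at compile time. The light math and all names here are illustrative, not from the slides:

```hlsl
// Sketch of SM5-style dynamic shader linkage. Names and lighting math
// are illustrative assumptions.
interface ILight
{
    float3 Shade(float3 posVS, float3 normal);
};

class CDirectionalLight : ILight
{
    float3 m_Dir;
    float3 m_Color;
    float3 Shade(float3 posVS, float3 normal)
    {
        return m_Color * saturate(dot(normal, -m_Dir));
    }
};

class CPointLight : ILight
{
    float3 m_PosVS;
    float  m_Radius;
    float3 m_Color;
    float3 Shade(float3 posVS, float3 normal)
    {
        float3 toLight = m_PosVS - posVS;
        float  atten   = saturate(1.0f - length(toLight) / m_Radius);
        return m_Color * atten * saturate(dot(normal, normalize(toLight)));
    }
};

// Concrete instances live in a constant buffer; g_Light is the abstract
// slot the application rebinds per light type without recompiling.
cbuffer cbLights
{
    CDirectionalLight g_DirLight;
    CPointLight       g_PointLight;
};

ILight g_Light;

float4 PSMain(float3 posVS : POSITIONVS, float3 normal : NORMAL) : SV_Target
{
    return float4(g_Light.Shade(posVS, normalize(normal)), 1.0f);
}
```

Each class body can then be reviewed and tested on its own, which is the maintainability argument: one shader shell, many independently developed implementations.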
I agree it does not improve performance, but it does not hinder it either. The linkage resolution is performed on the assembly code, so it should still be fast.
I'd rather rely on stable code that runs fast than buggy code that runs faster.