OK, a few thoughts on UI/UX and architecture while I’m still collaring every game developer I know to find out how the OpenGL pipeline actually works. Your patience is appreciated.
BEER is just semi-intelligent sugar for GLSL. The BEER interface is ultimately a visual programming interface for GLSL. Any given node is effectively a stand-in for a function with inputs and outputs. Before we can turn the node tree into GLSL there’ll be check time (making sure the node tree is sane and reporting to the user if it isn’t), compile time (turning the node tree into GLSL) and run time (running the GLSL). The node system needs to be smart enough not to try to compile something that will fail, but not so locked down that it prevents people from experimenting randomly.
Instant feedback is a must. Any such system (nodal or layer-based or whatever) should offer instant-as-possible feedback from user action to end result… Existing examples of this in Blender are Cycles preview render with the material node editor open in another panel, BI’s preview pane, and the Viewer node/Backdrop in compositor. These are all existing UI patterns within Blender which allow for easy experimentation and troubleshooting.
A running GLSL shader isn’t completely static. In GLSL it’s possible to send new information to the shader while it’s running and after it has compiled - info like colours, specular hardness, mix values, etc. (uniforms, in GLSL terms). This aligns somewhat with the way particular values can be exposed as inputs on a node group. Obviously, whenever the code structure changes the shader will have to recompile, but this should allow realtime feedback for certain things.
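A minimal sketch of that split, using a stand-in `Shader` class rather than real GL calls: value tweaks go through a cheap uniform update, while structural changes trigger a recompile.

```python
# Sketch: which user edits force a shader recompile vs. a cheap uniform update.
# The Shader class here is a stand-in for illustration, not a real GL wrapper.

class Shader:
    def __init__(self, source):
        self.source = source
        self.uniforms = {}
        self.compile_count = 0
        self._compile()

    def _compile(self):
        # In real code this would call glCompileShader / glLinkProgram.
        self.compile_count += 1

    def set_uniform(self, name, value):
        # Value tweaks (colour, hardness, mix factor) go straight to the
        # running shader - no recompile needed.
        self.uniforms[name] = value

    def set_source(self, source):
        # Structural changes (a different node graph) rebuild the GLSL.
        self.source = source
        self._compile()

s = Shader("void main() { /* graph v1 */ }")
s.set_uniform("specular_hardness", 50.0)       # realtime feedback
s.set_uniform("diffuse_colour", (1.0, 0.4, 0.2))
assert s.compile_count == 1                    # still only the initial compile
s.set_source("void main() { /* graph v2 */ }")
assert s.compile_count == 2                    # structure changed -> recompiled
```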
A good interface ultimately describes what the user is thinking, not the underlying code. Artists who are used to Photoshop are accustomed to building stuff up in layers and applying effects over the top. Even in Blender’s compositor, something like Gaussian Blur is a processing operation. Doing a blur in GLSL may not be what you’d expect - for instance, performing a Gaussian blur involves processing the input geometry through the vertex shader and the fragment shader. Good UX should abstract that quirkiness away so that the user doesn’t need to deal with it unless they want to. Something like blur should be presented to the user as a processing step, regardless of whether it’s a combined effort between the vertex shader and fragment shader, because as far as the user is concerned blur is a processing step. Ditto stuff like texture mapping - it should “just work”.
Power to those who can use it, managed simplicity for those who don’t, tweakability for in-between. There’s a world of complexity in GLSL which an artist doesn’t necessarily want to deal with for every single shader they build. On the other hand, you don’t want to take power away from people who know how to wield it (or people who are interested in finding out). So ready-made ubershaders as well as a power-user GLSL node (analogous to the Script/OSL node in Cycles) should both be a given from the outset. For the sake of user-friendliness, all ready-made nodes should have implicit defaults for things like normals - unless there’s something explicitly connected to a node input, they use something sensible at compile time. If a user needs to tweak a ready-made, they should be able to drill down a level - whether they’re drilling down to a node group or directly into GLSL. If the user wants to go all the way and “bake” a node group to a GLSL script in order to optimise the code by hand, warn that there’s no going back, then let 'em.
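The implicit-defaults idea could look something like this at compile time (all names here are hypothetical, not actual Blender/BEER API):

```python
# Sketch: resolving implicit defaults for unconnected node inputs at compile
# time. Socket names and fallback expressions are made up for illustration.

IMPLICIT_DEFAULTS = {
    "normal": "surface_normal",            # sensible fallback: the shaded normal
    "color":  "vec4(0.8, 0.8, 0.8, 1.0)",  # neutral grey
    "factor": "0.5",
}

def resolve_input(socket_name, connected_expr=None):
    """Use the connected node's expression if there is one, else the default."""
    if connected_expr is not None:
        return connected_expr
    return IMPLICIT_DEFAULTS[socket_name]

# A toon node with nothing plugged into its normal socket still compiles:
assert resolve_input("normal") == "surface_normal"
# ...but an explicit connection always wins:
assert resolve_input("color", "checker_texture()") == "checker_texture()"
```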
Optimised GLSL will almost always outperform GLSL constructed from prefabs. Until it’s been through optimisation of some sort (hand-tooling, compiler optimisation, etc.), a shader built from prefabs probably won’t run as quickly.
GLSL has its limits. Given we don’t always want to do stuff like glow and blur the hard way through GLSL, it would be absolutely awesome if the BEER system could output to multilayer OpenEXRs with info like Z buffer, alpha channels, movement vectors and other useful stuff as well as the usual RGB and Freestyle layers. (I tweeted a question to psy-fi about the feasibility of this and he says it’s doable.)
Architecturally, I’m not as certain how this all works, but I’m learning.
I’m still finding out about it from my game dev friends but it appears that most of what BEER wants is accomplished in the fragment shader, and some of it comes from the vertex shader in a way that mostly doesn’t need to be presented explicitly to the user, e.g. blurs and glows.
One thing I’m curious about is how the different shaders interact - at some point they all have to combine together to make an image, so I’m guessing there’s a main() function which calls the material shaders, asks them in turn “what happens at these pixel coordinates?” for each coordinate on the screen, then combines the results accordingly. Let’s call this the composite function.
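As a toy model of that guess (pure Python, so one explicit loop stands in for the GPU running a fragment shader once per pixel in parallel; the material functions and combine rule are made up):

```python
# Sketch of the guessed "composite function": for every pixel, ask each
# material shader what it contributes, then combine the answers.

def composite(width, height, material_shaders, combine):
    image = {}
    for y in range(height):
        for x in range(width):
            # "What happens at these pixel coordinates?"
            samples = [shade(x, y) for shade in material_shaders]
            image[(x, y)] = combine(samples)
    return image

# Two toy "materials" returning RGBA; red only covers x < 2.
red  = lambda x, y: (1.0, 0.0, 0.0, 1.0 if x < 2 else 0.0)
blue = lambda x, y: (0.0, 0.0, 1.0, 1.0)

def first_opaque(samples):
    """A crude combine rule: take the first sample with any alpha."""
    for s in samples:
        if s[3] > 0.0:
            return s
    return (0.0, 0.0, 0.0, 0.0)

img = composite(4, 1, [red, blue], first_opaque)
assert img[(0, 0)] == (1.0, 0.0, 0.0, 1.0)   # red covers x < 2
assert img[(3, 0)] == (0.0, 0.0, 1.0, 1.0)   # blue shows through elsewhere
```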
When they’re called, the material shaders in turn ask the vertex shaders about vertex information, normals, UV coordinates, etc., and ask the scene about lighting, other variables of interest, etc. So we want to be able to grab input from the vertex shader as well as from the scene. (We also might want to ask Blender about what other objects are doing as well, something we can’t currently do within materials until composite time. If we want to do a glowing object which gets occluded by a non-glowing object, all within GLSL… yeah.)
Some stuff can be pre-computed - info from the vertex shader that represents what the camera can see, for instance.
For flexibility, I’d want to be able to output not one but multiple different image outputs per material which contain RGB, alpha, z-buffer or whatever else we want to send to the composite function. When an image output is created in a material, maybe under the hood it goes into a registry of functions which can be used for final composite. The functions would need to present some sort of signature so the compositor knows what information it can extract. Once you hit a composite node tree, all the image outputs are there as nodes and you can combine them together how you like. This aligns with Blender’s material -> composite workflow pretty well.
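A sketch of what that registry might look like (material names, channel signatures and the registry shape are all made up for illustration):

```python
# Sketch: a registry of per-material image outputs that the composite stage
# can discover. The "channels" tuple is the signature the compositor reads
# to know what information it can extract from each output.

OUTPUT_REGISTRY = {}

def register_output(material, name, channels, func):
    """Each image output a material creates goes into the registry."""
    OUTPUT_REGISTRY[(material, name)] = {"channels": channels, "func": func}

register_output("toon_metal", "beauty", ("R", "G", "B", "A"),
                lambda x, y: (0.8, 0.8, 0.8, 1.0))
register_output("toon_metal", "depth", ("Z",),
                lambda x, y: (0.5,))

# The composite node tree can now list every available output as a node:
available = sorted(name for (_mat, name) in OUTPUT_REGISTRY)
assert available == ["beauty", "depth"]
assert OUTPUT_REGISTRY[("toon_metal", "depth")]["channels"] == ("Z",)
```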
When it comes time to actually compile the GLSL, the naive version of the algorithm doesn’t seem to be super-difficult. Every single node represents a GLSL function. If a node takes input from another node, it calls the function of that input node. If a node has an input value, we can treat that either as a constant or a variable. Start at composite and walk back, dropping more and more functions into the GLSL code with every new node we need information from. Whatever isn’t connected back to the composite node along some path doesn’t get included. Then compile, run, cross fingers. Obviously there are more optimal ways to do this, but optimisation comes later. And effects like glow and blur need to get trickier - possibly by following the node chain back to where it talks to the vertex shader and quietly putting the appropriate calls in.
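The naive walk described above can be sketched like this (hypothetical node names, with Python standing in for the real implementation and GLSL reduced to placeholder strings):

```python
# Sketch of the naive compile walk: start at the composite node, walk the
# inputs backwards, and emit one GLSL function definition per node reached.

class Node:
    def __init__(self, name, glsl, inputs=()):
        self.name = name        # doubles as the GLSL function name
        self.glsl = glsl        # the function's GLSL source (placeholder here)
        self.inputs = inputs    # upstream nodes this node calls

def emit(node, emitted=None, code=None):
    """Depth-first walk; dependencies are emitted before their callers."""
    emitted = set() if emitted is None else emitted
    code = [] if code is None else code
    if node.name in emitted:
        return code
    emitted.add(node.name)
    for up in node.inputs:
        emit(up, emitted, code)
    code.append(node.glsl)
    return code

tex    = Node("checker", "vec4 checker() { /* ... */ }")
toon   = Node("toon", "vec4 toon() { return checker(); }", (tex,))
orphan = Node("unused_blur", "vec4 unused_blur() { /* ... */ }")  # unconnected
comp   = Node("composite", "void main() { gl_FragColor = toon(); }", (toon,))

source = "\n".join(emit(comp))
assert "checker" in source and "toon" in source
assert "unused_blur" not in source   # nodes not connected back get dropped
```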
Something my gamedev friend pointed out is that the shading system needs to draw things in a specific order for stuff like reflections, object-as-light-source, etc. That’s probably beyond the scope of BEER for now, going off the primitives you’ve listed in that other thread.
How does this align with what you’ve got?