Render Pipeline Refactor
To disentangle our pipelines and make maintenance easier, we need to be able to configure pipelines piece-wise and reuse them in other places. To support that, we can add input and output nodes, which are just placeholders for color or DS targets. When saving the pipeline, these nodes are used to attach metadata to the pipeline asset that defines a list of input and output pins.
To instantiate this in another pipeline, we can add a "pipeline" node, which references a pipeline asset. Using the asset metadata, that node would then dynamically adjust which input and output pins it has.
The construction of the final pipeline can be done entirely at runtime.
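To make the input/output placeholder idea concrete, here is a minimal sketch of the metadata a pipeline asset could carry, and how a nested "pipeline" node would use it to expose pins dynamically. All names are illustrative, not the actual ezEngine API.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical target kinds matching the color / DS placeholder nodes.
enum class PinTargetType { Color, DepthStencil };

struct PipelinePinDesc
{
  std::string m_sName;
  PinTargetType m_Type;
};

// Metadata attached to the pipeline asset on save, derived from the
// input and output placeholder nodes in the graph.
struct PipelineAssetMetaData
{
  std::vector<PipelinePinDesc> m_InputPins;
  std::vector<PipelinePinDesc> m_OutputPins;
};

// A "pipeline" node in another asset would read this metadata to decide
// how many pins to expose.
inline std::size_t GetDynamicPinCount(const PipelineAssetMetaData& meta)
{
  return meta.m_InputPins.size() + meta.m_OutputPins.size();
}
```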
Some passes have properties that we want to adjust dynamically. Our volume components do this, but they are tied to pipelines. It would be much more powerful if pipelines could simply read from (global) blackboards. Users would then have much more control; for instance, a time-of-day cycle could be done through scripting. The volume components are still useful, since they do automatic blending, but they don't need to modify pipelines directly; they could simply modify values in a blackboard. Then they could also affect very different things (e.g. gameplay).
Render pipelines can only use global blackboards, since they are not part of a world. They only need to read number, bool and color values. We should add blackboard nodes to the pipeline asset to read a value and connect it directly to a property pin on a pass. It is unlikely that we need any kind of math or condition nodes for this. If there is any such logic necessary, it can always be done outside the pipeline.
The runtime pipeline only needs to know which blackboards it generally reads from. Every frame it checks the "change counter" and if any value changed, it updates its entire state.
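A minimal sketch of that per-frame check, assuming blackboards expose a monotonically increasing change counter (names are illustrative):

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// Stand-in for a global blackboard: bumps a counter on every write.
struct Blackboard
{
  unsigned m_uiChangeCounter = 0;
  std::unordered_map<std::string, double> m_Values;

  void SetValue(const std::string& sName, double fValue)
  {
    m_Values[sName] = fValue;
    ++m_uiChangeCounter;
  }
};

struct PipelineState
{
  unsigned m_uiLastSeenCounter = 0;
  bool m_bStateUpToDate = true;

  // Called once per frame: only invalidate the pipeline state if anything changed.
  void Update(const Blackboard& bb)
  {
    if (bb.m_uiChangeCounter != m_uiLastSeenCounter)
    {
      m_uiLastSeenCounter = bb.m_uiChangeCounter;
      m_bStateUpToDate = false; // triggers a full state update
    }
  }
};
```

The check is O(1) per blackboard per frame, regardless of how many values the pipeline reads.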
In the future it would be very useful if we could merge the main and the editor pipeline; they differ only very slightly. It would also be beneficial if the pipelines could switch between different SSAO implementations, have ways to toggle features such as SSR on and off, etc., and be able to do that dynamically at runtime.
Currently we have the concept of deactivating a pass, however, this has never worked well, and I suggest we remove that feature entirely.
Instead we should introduce a "switch node". The switch node takes a render target as input and outputs it to one of N nodes. All nodes that don't get the output are treated as if they are not part of the pipeline at all. This allows us, for instance, to activate one of multiple SSAO implementations, or to skip an entire pipeline pass.
Which path to take is determined by a number pin that reads from a blackboard. Switches, and thus the structure of a pipeline, have to be evaluated at runtime. When a switch happens, it is instant from one frame to the next. There won't be any blending. This is exclusively meant to configure the pipeline, it isn't meant for runtime effects. In theory you could use it to switch between something like regular / night vision, but if you need any kind of transition effect, that would need to be taken care of differently.
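A minimal sketch of evaluating such a switch node, assuming the selector is a number value read from a blackboard (illustrative names; clamping guards against an out-of-range script value selecting a non-existent path):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct SwitchNode
{
  // Candidate successor node ids; exactly one receives the render target.
  std::vector<int> m_OutputNodeIds;

  // fSelector comes from a blackboard number pin. Returns the chosen
  // node id, or -1 if the switch has no outputs.
  int SelectOutput(double fSelector) const
  {
    if (m_OutputNodeIds.empty())
      return -1;

    std::size_t uiIdx = fSelector < 0.0 ? 0 : static_cast<std::size_t>(fSelector);
    if (uiIdx >= m_OutputNodeIds.size())
      uiIdx = m_OutputNodeIds.size() - 1;

    return m_OutputNodeIds[uiIdx];
  }
};
```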
Additionally we may need a "reverse switch" node, which takes multiple render targets as input and outputs one of them. This is useful for switching between different render target formats (e.g. 8 bit vs 10 bit), while the rest of the pipeline continues operating on whatever makes it through. Actually, having only this reverse switch node might make everything easier, because you can just walk backwards from the Target Pass and see what you reach. But that's TBD.
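The "walk backwards from the Target Pass" idea could be sketched as a simple reverse reachability traversal: once each (reverse) switch has resolved which input it selected, every node reachable against the data-flow direction is active and everything else is skipped. Illustrative structure, not the actual graph representation:

```cpp
#include <cassert>
#include <map>
#include <set>
#include <vector>

// For each node id, the ids of the nodes feeding its currently selected
// inputs. A reverse switch contributes only the one input it selected.
using ReverseEdges = std::map<int, std::vector<int>>;

// Depth-first walk backwards from the target pass; the result is the set
// of nodes that are actually part of this frame's pipeline.
inline std::set<int> CollectActiveNodes(const ReverseEdges& edges, int iTargetPass)
{
  std::set<int> active;
  std::vector<int> stack = {iTargetPass};

  while (!stack.empty())
  {
    const int iNode = stack.back();
    stack.pop_back();

    if (!active.insert(iNode).second)
      continue; // already visited

    auto it = edges.find(iNode);
    if (it != edges.end())
      for (int iInput : it->second)
        stack.push_back(iInput);
  }
  return active;
}
```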
Extractors are currently nodes that have an effect just by being part of a pipeline. It is unclear whether we need to be able to deactivate extractors. As far as I can tell, as long as no pass uses some data, the extractor has no effect (?). Extractor configuration may need some discussion, e.g. what happens if an extractor is mentioned multiple times due to sub-pipelines?
I have only briefly glanced at the current code and can't say much about its state. In the editor, the render pipeline UI was the first code that ever implemented node graph functionality. By now we have much more functionality available. My impression is that the old code isn't particularly clean anymore.
At runtime, the question is whether one wants to work directly on a 1:1 representation of the editor graph and, e.g., implement node switching while executing the pipeline. Another option would be to have two representations: the configuration graph, which evaluates the blackboard and switch nodes, and from it a final "execution pipeline" that contains exactly the steps that are actually taken. The latter might be easier to analyze for dependencies. However, since property values can change every frame (fog color fading), the question is whether to rebuild this structure every frame, or whether to dynamically update its properties separately.
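A minimal sketch of the two-representation option: switch nodes are evaluated once on the configuration graph, which then emits a flat execution pipeline listing exactly the steps taken. Per-frame property changes would only patch values on those steps rather than rebuild the list. All names are illustrative.

```cpp
#include <cassert>
#include <string>
#include <vector>

// A node in the configuration graph; m_bOnActivePath is the result of
// evaluating the switch nodes against the blackboard.
struct ConfigNode
{
  std::string m_sPass;
  bool m_bOnActivePath = true;
};

// One step in the flattened execution pipeline.
struct ExecutionStep
{
  std::string m_sPass;
};

// Build the execution pipeline from the already-evaluated config graph.
inline std::vector<ExecutionStep> BuildExecutionPipeline(const std::vector<ConfigNode>& graph)
{
  std::vector<ExecutionStep> steps;
  for (const ConfigNode& node : graph)
  {
    if (node.m_bOnActivePath)
      steps.push_back({node.m_sPass});
  }
  return steps;
}
```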
Overall, I have the feeling that it might be easier to build a completely new render pipeline + UI in parallel and not touch the old one until it is fully functional, and then switch over. That would allow us to try out some things first, before fully committing to them. This would probably also mean reimplementing all existing passes. While at it, we should review their properties. There are also weird passes like the EditorPickingPass, which has no inputs or outputs and therefore has no obvious point in time when it is executed, nor any indication of which render targets it even operates on.
An ezRenderGraph encapsulates a large subset of a rendered frame as a sub-graph. It can have inputs and outputs and will usually contain many ezRenderPass and ezComputePass instances. The inputs and outputs of the ezRenderGraphs form another graph, which represents the render dependencies of an entire frame.
To not overcomplicate things, ezRenderPass and ezComputePass are executed in the order they were added to the ezRenderGraph. Rebuilding the graph of passes in the ezRenderGraph is expensive, so ideally this should not be done regularly, but it should still be real-time compatible for smaller graphs.
We have to decide how to handle multiple ezRenderGraphs in a frame. We could work on each separately, but the main benefit of the graph is to do optimal barriers / layout changes, so on the boundary of the graphs we need to stitch some things together. This could be automated by a higher-level graph, or we make sure that inputs / outputs of graphs define their target use, and the user has to set that correctly when creating the graphs.
Each render pipeline would create its own ezRenderGraph. E.g. the render graph for a shadow map will have the shadow map as output and the main view render graph will have the same shadow map as input.
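The shadow-map example above amounts to chaining per-pipeline graphs through their declared inputs and outputs. A minimal sketch of the dependency information a higher-level frame graph would need (illustrative names only):

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Declared boundary of one per-pipeline render graph.
struct RenderGraphDesc
{
  std::string m_sName;
  std::vector<std::string> m_Inputs;  // resources read from other graphs
  std::vector<std::string> m_Outputs; // resources produced for other graphs
};

// True if 'consumer' must execute after 'producer' because it reads one
// of its outputs - e.g. the main view reading the shadow view's shadow map.
inline bool DependsOn(const RenderGraphDesc& consumer, const RenderGraphDesc& producer)
{
  for (const std::string& sInput : consumer.m_Inputs)
  {
    if (std::find(producer.m_Outputs.begin(), producer.m_Outputs.end(), sInput) != producer.m_Outputs.end())
      return true;
  }
  return false;
}
```

The boundary resources are also where the two graphs would have to agree on target usage / layout, per the stitching question above.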
Each ezRenderPipelinePass in the pipeline will produce an array of ezRenderPass / ezComputePass instances.
Currently, the render pipeline is only rebuilt if marked as dirty, but it is unclear which changes should mark a pipeline as dirty. Assuming we go the route of creating a blackboard for the parameters of a pipeline, there are three scenarios:
- A simple parameter in a constant buffer is changed, e.g. SSAO strength. This just requires the SSAO pass to read the new value 'somehow' and apply it to the constant buffer.
- The SSAO quality level is changed, which requires the SSAO pass - and only the SSAO pass - to re-create its render passes, which in turn requires a rebuild of the render graph.
- The SSAO implementation is switched using a switch node. This requires rebuilding the high-level render pipeline pass graph and then rebuilding the render graph from the remaining nodes.
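The three scenarios could be classified as escalating dirty levels, so a change only pays for what it actually invalidates. A minimal sketch under that assumption (illustrative names):

```cpp
#include <cassert>

// One level per scenario, ordered by how much work it invalidates.
enum class DirtyLevel
{
  None = 0,
  ParameterOnly = 1,  // e.g. SSAO strength: patch a constant buffer
  RecreatePasses = 2, // e.g. SSAO quality: one pass re-creates its render passes
  RebuildPipeline = 3 // e.g. switch node changed: rebuild the pass graph itself
};

// Accumulate the most severe level reported during a frame; the pipeline
// then reacts once, at the highest required level.
inline DirtyLevel Accumulate(DirtyLevel current, DirtyLevel reported)
{
  return reported > current ? reported : current;
}
```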
There are multiple open issues:
- How do we differentiate between the three cases when a parameter changes? I guess we can detect a full rebuild of the pipeline by looking just at changes to switch nodes, if we decide that's the only way to change the flow in the graph. For the latter two cases it's a bit more difficult, but we could allow a node to call a function on the graph to inform it that it has made a change that requires re-evaluating its passes?
- How do we pass values from the blackboard into a pass? Right now we match the nodes by name and push the values in via key-value mappings on the reflected values, but if we want to pull the value instead of pushing it, we need to somehow wire up the blackboard parameter with the reflection. This could be done by exposing reflected properties beside the nodes in the editor node graph, but that could lead to a lot of exposed input pins if a lot of parameters are exposed.
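The pull model could be sketched as a binding from a blackboard key to a pass property; in the real system the target would be resolved through reflection rather than a raw pointer. Illustrative names only:

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// Stand-in for a global blackboard holding number values.
using Blackboard = std::unordered_map<std::string, double>;

// Binds one blackboard key to one pass property (pull model).
struct PropertyBinding
{
  std::string m_sBlackboardKey;
  double* m_pTarget = nullptr; // would be a reflected property in practice

  // Pull the current value; returns false if the key is missing, in which
  // case the property keeps its last value.
  bool Pull(const Blackboard& bb) const
  {
    auto it = bb.find(m_sBlackboardKey);
    if (it == bb.end())
      return false;

    *m_pTarget = it->second;
    return true;
  }
};
```

With explicit bindings like this, only properties the user actually wired up show as input pins, which might mitigate the pin-explosion concern.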