Thursday, June 27, 2013

Depsgraph - ID Groups

In this post, we take a second look at one of the issues from the Granularity Problem: that of “strongly connected components”. That is, a set of nodes which, for all intents and purposes, end up referring to each other.




Some Theory – Dependency Cycles and Strongly Connected Components

If you'll recall from the Granularity Problem, we often have phantom cycles where two or more objects appear to be interdependent (i.e. they each rely on each other in a bit of a “Mexican standoff” type of situation), thus forming a cyclic dependency between them. Groups of objects in such relationships are basically “strongly connected components” (loosely speaking). The problem with such groups of objects, though, is that when you try to order them, you can't really find any ordering which will truly satisfy all the dependency relationships, especially if you only want to evaluate each item once.
Let's clarify this a bit with a few examples:
  • “A → B → C” – In this standard case, we have 3 nodes that form a normal dependency chain. We can read this as: “C depends on B, which depends on A”. To evaluate this, we simply start from a node which doesn't have any dependencies (i.e. “A”) and evaluate that, then we look at its descendants (or outlinks) and evaluate whichever of those has had all its dependencies (or inlinks) evaluated already (i.e. “B”), and so on until the whole graph has been evaluated (a rough sketch of this scheme follows just after this list). The important thing to note here is that when we have evaluated the final node (i.e. “C”), the state of all the other nodes is now valid and/or hasn't changed since they were last evaluated. As such, this particular graph is called a Directed Acyclic Graph (DAG).
  • “A → B → C → A → ...” – This, however, is an example of what we call a “cyclic” graph. If we go about evaluating this by starting at node A (or in fact any of the other nodes), we will eventually get back to that same node again. To prevent getting stuck in this forever, we could say that we stop as soon as we encounter a node that we've seen before (i.e. this is what Blender does now). However, since the nodes all depend on each other's results, by doing so we've now violated one of the relationships in the graph by breaking one of those links. This is what causes the infamous “lagging by one frame” errors (i.e. after undoing some action when dealing with a rig with cyclic dependencies, you need to change frames backwards then forwards again to get things looking somewhat normal – though not entirely!) that can be seen in Blender these days.
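To make this concrete, here is a minimal Python sketch of that evaluation scheme (not actual Blender code – the node/edge representation is purely illustrative): evaluate whatever has no unevaluated dependencies left, and treat anything left over at the end as proof of a cycle.

```python
from collections import defaultdict, deque

def evaluate(nodes, edges, eval_node):
    """Evaluate nodes in dependency order; 'edges' is a list of (a, b)
    pairs meaning "b depends on a". Raises if a cycle prevents a full
    ordering (i.e. the graph is not a DAG)."""
    inlinks = defaultdict(int)       # number of unevaluated dependencies
    outlinks = defaultdict(list)     # nodes that depend on a given node
    for a, b in edges:
        inlinks[b] += 1
        outlinks[a].append(b)

    # Start from nodes with no dependencies (like "A" in the example).
    ready = deque(n for n in nodes if inlinks[n] == 0)
    done = 0
    while ready:
        node = ready.popleft()
        eval_node(node)
        done += 1
        # A descendant becomes ready once all of its inlinks are evaluated.
        for child in outlinks[node]:
            inlinks[child] -= 1
            if inlinks[child] == 0:
                ready.append(child)

    if done != len(nodes):
        raise ValueError("cycle detected - no valid evaluation order exists")

# "A -> B -> C" evaluates fine; adding ("C", "A") would make this raise.
evaluate(["A", "B", "C"], [("A", "B"), ("B", "C")], print)
```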

As you may know from experience, after running through such cycles a few times (i.e. by frame stepping, as mentioned before), the relationships generally tend to “converge” to a point where they are in near equilibrium (i.e. all relationships are balanced out so that things don't change too much – or at least only by a largely imperceptible amount – between runs, even if some of the individual links in the chain aren't exactly obeyed fully).
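As a hypothetical sketch of this convergence idea (again in Python, with made-up names – the real evaluation step and state are of course much richer than a dict of numbers), we can simply re-run the cyclic cluster's evaluation until the per-node change drops below some tolerance, or until a pass limit is hit:

```python
def converge(cycle_nodes, eval_once, state, max_passes=5, epsilon=1e-6):
    """Repeatedly evaluate a cyclic cluster until its state stops changing
    (or until max_passes is hit). 'eval_once' takes the current state dict
    and returns an updated copy; 'epsilon' defines "imperceptible" change."""
    for _ in range(max_passes):
        new_state = eval_once(state)
        delta = max(abs(new_state[n] - state[n]) for n in cycle_nodes)
        state = new_state
        if delta < epsilon:
            break   # near equilibrium - further passes would change little
    return state
```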

So, the question now becomes: since we can't find any single-pass “linear” method to reliably and correctly do this, how can we evaluate these cycles in at least a somewhat “controlled” manner?

Isolating Strongly Connected Components

Perhaps one of the first things we need to do in order to manage these beasts is to realise that, to gain some degree of control over the cycle-evaluation/resolution process, we must identify these nodes and separate them out from the overall dependency graph in some way. Indeed, our current dependency graph implementation does this, and the implementation docs also mention this (see fig2 in particular). However, where we differ from that implementation is in what we do once we find those components.
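For reference, the standard way to identify such components is Tarjan's algorithm; below is a rough Python sketch of it (not the actual depsgraph code – the node and link representations here are assumptions made purely for illustration):

```python
def strongly_connected_components(nodes, outlinks):
    """Tarjan's algorithm: returns a list of SCCs, each a list of nodes.
    'outlinks' maps a node to the nodes that depend on it."""
    index = {}          # discovery order of each node
    lowlink = {}        # smallest index reachable from the node
    on_stack = set()
    stack = []
    sccs = []
    counter = [0]

    def visit(v):
        index[v] = lowlink[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in outlinks.get(v, ()):
            if w not in index:
                visit(w)
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif w in on_stack:
                lowlink[v] = min(lowlink[v], index[w])
        if lowlink[v] == index[v]:
            # v is the root of an SCC; pop the whole component off the stack
            component = []
            while True:
                w = stack.pop()
                on_stack.remove(w)
                component.append(w)
                if w == v:
                    break
            sccs.append(component)

    for n in nodes:
        if n not in index:
            visit(n)
    return sccs

# A and B depend on each other; C just depends on B.
print(strongly_connected_components(["A", "B", "C"],
                                    {"A": ["B"], "B": ["A", "C"]}))
# -> [['C'], ['B', 'A']]  (components come out in reverse dependency order)
```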

Let's look at another example.
1. Here we have a standard, data-oriented depsgraph with coarse-grained granularity. Note how nodes A and B in this state look like they refer to each other, forming a cyclic dependency between these two nodes.



2. Let's round up these interdependent nodes and replace them with a single node – the “AB Complex” – representing this cluster of closely related dependencies, thus isolating and grouping them together for further analysis and handling:



3. With these nodes gathered together, we now know that they are in a tight relationship with each other, when viewed at the current granularity. This could mean either a phantom cycle or a genuine cycle. In any case, we've isolated these from the general graph, so at worst we only have to cycle over (i.e. re-evaluate) this sub-graph several times to get it working (i.e. we've got a smaller problem than before). Since most of the problems we've been having are really just phantom cycles – where from the user's perspective there are no cycles, but from Blender's perspective there are, due to the coarse granularity – we can apply our little “explode then interleave” trick to see if we can create a little sub-graph that neatly eliminates the whole problem (but only if we remember that this set of objects will have to be evaluated together). A sketch of the grouping step (step 2) follows below.
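As a rough illustration of that grouping step – collapsing each tightly-coupled cluster into one “complex” node – here's a small Python sketch which reuses the strongly_connected_components() helper sketched earlier (again purely illustrative, not the actual implementation):

```python
def condense(nodes, edges):
    """Collapse each strongly connected component into one 'complex' node
    (like the "AB Complex" above), returning the condensed node set and
    edge set. Reuses the strongly_connected_components() sketch above."""
    outlinks = {}
    for a, b in edges:
        outlinks.setdefault(a, []).append(b)
    sccs = strongly_connected_components(nodes, outlinks)

    # Map every original node to the group node that now represents it.
    group_of = {}
    group_nodes = []
    for component in sccs:
        name = "+".join(sorted(component)) if len(component) > 1 else component[0]
        group_nodes.append(name)
        for n in component:
            group_of[n] = name

    # Keep only edges that cross group boundaries; internal edges become
    # part of the group's own (sub-)evaluation graph instead.
    group_edges = {(group_of[a], group_of[b])
                   for a, b in edges if group_of[a] != group_of[b]}
    return group_nodes, sorted(group_edges)

# A and B depend on each other; C depends on B.
print(condense(["A", "B", "C"], [("A", "B"), ("B", "A"), ("B", "C")]))
# -> (['C', 'A+B'], [('A+B', 'C')])
```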




A Side Note about Implementation Concerns and the Generality Goal

This method of isolating out little sets of tightly-coupled entities is one of the reasons why we need to get rid of the current set of handler functions like object_handle_update() (and the scene-level loopers which call it). Although these types of groups act somewhat like replacements for those (except that instead of iterating over objects at the top level, we iterate over these groups, which then dispatch out evaluation steps), they also have the potential to be more general than those old static functions.

That is, any cluster of ID-Blocks can be bolted together like this, to be evaluated as their interdependencies dictate, instead of only in a fixed order (which sometimes screws up). Thus, it seems that these groups can also extend upwards and cover the “Generality” goal – evaluating the current application state becomes merely a problem of figuring out how to schedule these groups (which are wired together so that their dependencies will be evaluated in a way that doesn't result in phantom cycles).

Groups within Groups...
Some of you may be wondering at this point what happens if, for whatever reason, there ends up being yet another granularity/phantom-cycle problem between groups (due to the way they were partitioned).

On one hand, these situations shouldn't happen at all, since as soon as two coarse blocks tightly interdepend on each other, we simply clump them together into one group, and then hook up their components within that group's evaluation graph so that they don't have any problems. Here, the blocks involved will usually both be ID-Blocks, but the interdependency could also be between an ID-Block and a group, in which case the ID-Block simply gets folded inside the group.
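A hypothetical sketch of this “clumping” rule (in Python, using a union-find style partition – this is my own illustration, not the planned data structure): every block starts out as a group of one, and any two blocks found to tightly interdepend end up sharing a group.

```python
class GroupPartition:
    """Sketch of the clumping rule: whenever two blocks turn out to
    tightly interdepend, merge their groups (an ID-Block is just a
    group of one until it gets folded into something bigger)."""
    def __init__(self):
        self.parent = {}

    def group_of(self, block):
        # Find the representative of the group a block belongs to,
        # creating a single-entity group on first sight.
        self.parent.setdefault(block, block)
        while self.parent[block] != block:
            self.parent[block] = self.parent[self.parent[block]]
            block = self.parent[block]
        return block

    def clump(self, a, b):
        # Called when a and b are found to tightly interdepend:
        # fold one group (or lone ID-Block) into the other.
        ra, rb = self.group_of(a), self.group_of(b)
        if ra != rb:
            self.parent[rb] = ra

groups = GroupPartition()
groups.clump("Armature", "MeshObject")   # phantom cycle detected between them
print(groups.group_of("MeshObject") == groups.group_of("Armature"))  # True
```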

On the other hand, there's still a conceptual advantage/simplicity to merely restricting these situations to groups of ID-Blocks. However, if we had a tiered system like this, we'd eventually need to flatten them out somewhere to yield the flattened graph. Having said that, we'd get benefits in terms of being better able to identify sub-clusters within a group, which can further be treated semi-autonomously (i.e. perhaps more opportunities for topology-driven parallelism here).

Size of Graphs
The other concern, perhaps, is that these groups end up as rather large graphs. While it's true that we could end up with perhaps one or two really massive graphs corresponding to these groups and then a few smaller ones for the leftovers, at the end of the day we're still breaking down/reducing the size of the problem, even if not by very much!

What about leftover ID-Blocks?
I briefly mentioned “leftover” ID-Blocks earlier. Ultimately, I think we're going to end up with a few of these (or perhaps many, in simple acyclic situations). These would still end up as single-entity groups, resulting in a composite-like situation (though a slightly weird one).

ID Groups

Having talked for a while about these “groups”, what exactly do I mean? The following diagram aims to show the main components that we have:




In short, each group consists of 3 sections or components (a rough code sketch follows this list):
  1. Headliner Section – This basically keeps track of the ID-Blocks which get clumped under the group, and allows easier external queries about “data”. Each ID-Block assigned here gets a little internal identifier used for tagging the nodes that it contributes to the Eval Graph, which helps figure out which of these need updating when changes occur.
  2. Eval Graph – This is basically the graph of atomic nodes that need to get evaluated for this group, i.e. the atomic “operations” for the ID-Blocks defined in the Headliner Section.
  3. Cyclic Resolution Component – This basically controls the process for resolving “unsolvable cyclic dependencies”, by running the Eval Graph numerous times (up to the limit defined by users) to hopefully get the graph to converge. Other extensions can plug into this too... (more details in a later post)
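To give a feel for how these three parts might hang together, here is a rough Python sketch – the field names and methods are illustrative guesses on my part, not the actual design:

```python
class IDGroup:
    """Rough sketch of the three-part group described above (all names
    here are illustrative, not the real Blender data structures)."""

    def __init__(self, id_blocks, max_passes=3):
        # 1) Headliner Section: which ID-Blocks live in this group, each
        #    with a small internal tag used to mark the nodes it owns.
        self.headliner = {id_block: tag for tag, id_block in enumerate(id_blocks)}

        # 2) Eval Graph: atomic operations contributed by those ID-Blocks,
        #    plus the links between them (used for ordering; not walked
        #    in this simplified sketch).
        self.eval_nodes = []     # [(operation callable, owner_tag), ...]
        self.eval_links = []     # [(from_index, to_index), ...]

        # 3) Cyclic Resolution: user-settable pass limit for convergence.
        self.max_passes = max_passes

    def add_operation(self, id_block, operation):
        self.eval_nodes.append((operation, self.headliner[id_block]))
        return len(self.eval_nodes) - 1    # index used when adding links

    def nodes_for(self, id_block):
        # Headliner lookup: which eval nodes does this ID-Block contribute?
        tag = self.headliner[id_block]
        return [i for i, (_, t) in enumerate(self.eval_nodes) if t == tag]

    def evaluate(self):
        # Run the whole eval graph up to max_passes times; a real
        # implementation would walk eval_links in dependency order
        # (as in the earlier evaluation sketch) and stop early once
        # the results converge.
        for _ in range(self.max_passes):
            for operation, _tag in self.eval_nodes:
                operation()
```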

Refer back to “What's in a Node? → Split-Hierarchy Model” for how this corresponds with the model presented there.
