Sharon describes a system of Constellation Models for Sketch Recognition. Constellation models describe a sketch by the relationships between its constituent objects. The objects themselves are recognized using very simple features: the position of the bounding box, the length of its diagonal, and the angle of that diagonal. Interactions between pairs of objects are described with equally simple features: the distance between the centers of the two strokes, the shortest distance between any two endpoints, and the shortest distance between any two points.
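To make the features concrete, here is a minimal sketch of how they might be computed (my own illustration, not code from the paper), assuming each stroke is an (N, 2) array of x/y points:

```python
import numpy as np

def stroke_features(stroke):
    """Single-object features: bounding-box position, diagonal length, diagonal angle."""
    mins, maxs = stroke.min(axis=0), stroke.max(axis=0)
    center = (mins + maxs) / 2.0          # treat the box center as its "position"
    diag = maxs - mins
    length = np.hypot(diag[0], diag[1])   # length of the bounding-box diagonal
    angle = np.arctan2(diag[1], diag[0])  # angle of the diagonal
    return np.array([center[0], center[1], length, angle])

def pair_features(a, b):
    """Pairwise features: center distance, min endpoint distance, min point distance."""
    center_a = (a.min(axis=0) + a.max(axis=0)) / 2.0
    center_b = (b.min(axis=0) + b.max(axis=0)) / 2.0
    center_dist = np.linalg.norm(center_a - center_b)
    ends_a, ends_b = a[[0, -1]], b[[0, -1]]
    endpoint_dist = min(np.linalg.norm(p - q) for p in ends_a for q in ends_b)
    point_dist = np.min(np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1))
    return np.array([center_dist, endpoint_dist, point_dist])
```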
These feature vectors drive an ML search through the space of possible labellings, looking for the assignment of labels to strokes that maximizes the probability of the labeling given the training data. The search is broken into two stages: required components are labeled first, then optional components. Splitting the search this way drastically reduces the search space, since each stage considers fewer candidate labels and fewer candidate strokes, which in turn greatly improves search time.
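Building on the feature helpers above, here is a simplified, hypothetical illustration of the two-stage idea. The paper's actual search is more sophisticated than this; the sketch below scores a labeling by summing log-likelihoods from per-label and per-pair models (assumed to be objects with a logpdf method fitted on training data), exhaustively assigns the required labels, and then adds optional labels only when they improve the score:

```python
from itertools import permutations

def labeling_score(assignment, strokes, label_models, pair_models):
    """Log-likelihood of an {label: stroke_index} assignment (illustrative only)."""
    score = 0.0
    for label, idx in assignment.items():
        score += label_models[label].logpdf(stroke_features(strokes[idx]))
    for la, ia in assignment.items():
        for lb, ib in assignment.items():
            if la < lb:  # pair_models is assumed keyed by sorted label pairs
                score += pair_models[(la, lb)].logpdf(
                    pair_features(strokes[ia], strokes[ib]))
    return score

def two_stage_search(strokes, required, optional, label_models, pair_models):
    # Stage 1: exhaustively assign required labels to distinct strokes.
    # (Assumes there are at least as many strokes as required labels.)
    best, best_score = None, float("-inf")
    idxs = range(len(strokes))
    for combo in permutations(idxs, len(required)):
        assignment = dict(zip(required, combo))
        s = labeling_score(assignment, strokes, label_models, pair_models)
        if s > best_score:
            best, best_score = assignment, s
    # Stage 2: greedily add each optional label only if it improves the score.
    for label in optional:
        remaining = [i for i in idxs if i not in best.values()]
        candidates = [(labeling_score({**best, label: i}, strokes,
                                      label_models, pair_models), i)
                      for i in remaining]
        if candidates:
            s, i = max(candidates)
            if s > best_score:
                best[label] = i
                best_score = s
    return best, best_score
```

The point this is meant to show is that the required stage fixes most of the structure up front, so the optional stage only ever searches over the handful of strokes left unassigned.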
I think this is a really neat approach. With respect to the current assignment, it would be pretty cool to break course of action diagrams into components like this, for example to identify the echelon modifier and the strength modifier.
This approach is also somewhat similar to LADDER in that where objects sit relative to the other objects in the sketch matters a great deal. For example, a right eye will never appear on the wrong side of the left eye. These constraints are not modeled explicitly; they are learned from the training data. The constellation model only understands positional relationships, whereas LADDER can model more complex relationships like parallelism.