Modular Component-Based AI Systems

posted in GDC 2011
Published March 09, 2011
The GDC 2011 AI Summit opened up with three heavy-hitters from the AI world (Brett Laming, Joel McGinnis, and Alex Champandard) discussing the merits and motivations behind component-based architectures.

Although the term has gained popularity in recent years, and most people in the room expressed at least some familiarity with the concepts, there remains a substantial amount of uncertainty as to what exactly a component architecture entails. To address this, the lecturers presented an outline of how and why component architecture gained the spotlight in modern games engineering, and provided some tips and important rules on how to approach component-based designs.


Historical Trends
As object-oriented programming took hold and languages like C++ finally gained enough traction in the games industry to see widespread adoption, the typical design methodology involved creating rich, deep hierarchies of inter-derived classes. This quickly ran into issues such as multiple inheritance's "diamond problem," brittle structure, and questions of how to deal with "non-inheritance" situations (where some but not all functionality of a branch of the inheritance tree is desired in a particular leaf class).
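
To make the diamond problem concrete, here is a minimal C++ sketch (the class names are purely illustrative, not from the talk): two branches of a hierarchy derive from the same root, and a leaf class inherits from both.

    // A minimal sketch of the "diamond problem" with hypothetical game classes.
    // Without virtual inheritance, PickupWeapon contains two separate Entity
    // subobjects, so unqualified access to 'position' is ambiguous.
    struct Entity           { float position[3]; };
    struct Weapon  : Entity { int damage; };
    struct Pickup  : Entity { float weight; };
    struct PickupWeapon : Weapon, Pickup {};      // which Entity::position?

    // Virtual inheritance resolves the ambiguity, but at the cost of extra
    // indirection and a more brittle hierarchy:
    struct Weapon2 : virtual Entity { int damage; };
    struct Pickup2 : virtual Entity { float weight; };
    struct PickupWeapon2 : Weapon2, Pickup2 {};   // single shared Entity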

One reaction to this was to push functionality towards the roots of the inheritance tree, essentially creating "fat base classes." This proved even more problematic in practice: bloat of code in the base classes decreased readability, clarity, and maintainability, while bloat of data in the base classes led to immense memory wastage and overhead, which became thoroughly unacceptable as games grew in scale.

A more promising direction was to make the entire engine core highly lightweight and extremely data-driven, where virtually all of the behaviour and richness of the game simulation was expressed in data rather than directly modeled in code. This approach still has its proponents, but it suffers from a critical weakness: it lacks natural hooking points and the specificity by which one can drill into the running simulation and inspect or modify its state. Put simply, offloading the complexity into data (and away from code) deprives us of the benefits of code-modeled introspection and manipulation.


Enter the Component Model
A central observation behind the introduction (and indeed the widespread adoption) of component-based architectures is that there are fundamentally four things in a simulation which need to be elegantly captured:

  1. Classification of entities (Is this a weapon? An item? A door? A sharp weapon? etc.)
  2. Key properties (How much damage does this weapon do? How much does it weigh? Which direction does the door open, and what key(s) does it require?)
  3. Defined mappings of inputs to outputs (Weapon damage values modify health values; keys modify door lock states; etc.)
  4. Interchangeability (Can I use this weapon in place of that one?)

Component architectures provide a modeling tool for all four areas; although other approaches can make the same claim, components offer a compact and highly elegant way to reach these goals.
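
As a rough illustration of how components cover classification and key properties, consider a sketch along these lines (the names are hypothetical, not from the session): an entity is just an identifier plus a bag of components, "is this a weapon?" becomes "does it carry a damage component?", and the key properties live inside the components themselves.

    #include <memory>
    #include <string>
    #include <unordered_map>

    // Illustrative only: a base component plus a couple of concrete ones.
    struct Component { virtual ~Component() = default; };

    struct DamageComponent : Component {
        float damage = 10.0f;        // key property: how much damage?
    };
    struct DoorComponent : Component {
        bool locked = true;          // key property: lock state
        std::string requiredKeyId;   // key property: which key opens it?
    };

    // An entity is an id and a collection of components; classification is
    // expressed by which components it carries.
    struct Entity {
        int id = 0;
        std::unordered_map<std::string, std::unique_ptr<Component>> components;

        bool has(const std::string& type) const {
            return components.count(type) != 0;
        }
    };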

The main difference between the component mode of thought and older, less desirable approaches is the notion of systems. Indeed, proper application of a component architecture demands rich use of systems; anything less will essentially collapse back into the same kind of fat architecture we were trying to escape in the first place. In a systems-oriented model, granularity of functionality also becomes desirable rather than problematic.

Systems are, fundamentally, the "glue" by which components are organized and compartmentalized. Moreover, systems formalize the interactions between components and other systems. This drives reusability in several key ways:

  • Inheritance can be used (sparingly!) to reuse logic and data relationships directly
  • The structure of interrelated components can be reused modularly
  • Data flow between components and systems can be interchanged as needed
  • Compartmentalization separates reusable elements into neat packages
  • As a bonus, parallelization can easily be accomplished between systems

Careful use of class inheritance, along with factory methods, serialization, and run-time type information (RTTI) frameworks, can provide a highly data-driven model without sacrificing the specificity and hooks of a richer code model. In addition, the deployment of systems can help identify dependencies and functional structure within the simulation itself, allowing for easier maintenance and iteration on existing code.
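
One possible shape of such a factory-driven setup, sketched here with hypothetical names rather than anything shown in the session: data (a level file, say) names component types as strings, and code registers a creation function per name, keeping a concrete class behind every piece of data for inspection and debugging.

    #include <functional>
    #include <memory>
    #include <string>
    #include <unordered_map>

    struct Component { virtual ~Component() = default; };

    // Hypothetical factory: maps type names found in data to creation hooks
    // implemented in code.
    class ComponentFactory {
    public:
        using Creator = std::function<std::unique_ptr<Component>()>;

        void registerType(const std::string& name, Creator create) {
            creators_[name] = std::move(create);
        }

        std::unique_ptr<Component> create(const std::string& name) const {
            auto it = creators_.find(name);
            if (it == creators_.end()) return nullptr;   // unknown type in data
            return it->second();
        }

    private:
        std::unordered_map<std::string, Creator> creators_;
    };

A serialization layer can then walk a data description of an entity and ask the factory for each named component, keeping the simulation data-driven without losing the code-side hooks.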

Another potential win of systems over game graphs and similar structures is the elimination of redundant searches. A system can keep track of all the components/entities which are relevant to it directly, thereby avoiding the need to constantly traverse the game universe looking for those entities. This in turn starkly highlights the lifetime relationships between various entities, which can be a major advantage when it comes time to do dependency analysis on the simulation itself.
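
A sketch of the bookkeeping involved, assuming a hypothetical physics system (again, illustrative rather than from the talk): components register with the system when created and unregister when destroyed, so the per-frame update never has to search the wider game world.

    #include <algorithm>
    #include <vector>

    struct PhysicsComponent { float position = 0.0f, velocity = 0.0f; };

    // Hypothetical system that owns the list of components it cares about.
    class PhysicsSystem {
    public:
        void add(PhysicsComponent* c)    { components_.push_back(c); }
        void remove(PhysicsComponent* c) {
            components_.erase(
                std::remove(components_.begin(), components_.end(), c),
                components_.end());
        }

        void update(float dt) {
            // Only the registered, relevant components are ever touched.
            for (PhysicsComponent* c : components_)
                c->position += c->velocity * dt;
        }

    private:
        std::vector<PhysicsComponent*> components_;
    };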

Last but not least, components allow for late binding and re-binding of type information. Have a set of logic that relies on park benches, which suddenly needs to be rewritten to use dumpsters instead? The code change amounts to tweaking a single "tag" within the appropriate system, rather than making large numbers of tedious and fragile changes to raw code dependent on the actual "park bench" or "dumpster" classes. The data-driven aspects of component architectures become a major advantage in this sort of scenario.
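
In code, the park-bench-to-dumpster switch might look something like the following (the tag names and the LoiterSystem are hypothetical): the system only ever asks for entities carrying a particular tag, so retargeting it is a one-string change in data rather than a rewrite against concrete classes.

    #include <string>
    #include <vector>

    struct Entity { std::string tag; /* ...components... */ };

    // Hypothetical "sit somewhere" system: it never names a ParkBench or
    // Dumpster class, only a tag supplied from data.
    class LoiterSystem {
    public:
        explicit LoiterSystem(std::string targetTag)
            : targetTag_(std::move(targetTag)) {}

        std::vector<Entity*> findTargets(std::vector<Entity>& entities) const {
            std::vector<Entity*> out;
            for (Entity& e : entities)
                if (e.tag == targetTag_) out.push_back(&e);
            return out;
        }

    private:
        std::string targetTag_;   // switch "park_bench" -> "dumpster" in data
    };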

It is worth reinforcing the fact that component models are not "an architecture" but rather a paradigm in which architectures can be created. As with virtually everything in the engineering and architectural realms, the exact details will depend highly on the specific game or simulation we are setting out to make.


Parallelization
Component models can be a very powerful tool on modern platforms where concurrently-executing code is a central aspect of engine design. One important observation is that AI work (and indeed simulation work in general) essentially consists of reading and writing properties of entities in the simulation, and potentially rearranging the logical structure of those entities (moving objects, creating new NPCs, recycling old assets, etc.). Envisioning this as a sort of circuit diagram is a useful technique: data flows "downstream" between systems each frame. Any mutation of game state which can be passed downstream to later systems can be accomplished using just the execution stack space, since later systems will always have safe access to that memory. However, any "upstream" communication needs to be delayed by a frame, by queuing a "message" which is read by the appropriate system in a subsequent tick. This decomposes nicely into a job/task system, which is a (deservedly) popular means of handling parallelism in modern engines.
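
A minimal sketch of the upstream-message idea, assuming a hypothetical spawn system that sits early in the frame (none of these names come from the session): downstream systems queue requests during the current tick, and the queue is drained at the start of the spawn system's next update.

    #include <vector>

    // Hypothetical upstream message: e.g. a combat system late in the frame
    // asking the spawn system, which runs earlier, to create an NPC next tick.
    struct SpawnRequest { int archetypeId; float x, y, z; };

    class SpawnSystem {
    public:
        // Called by downstream systems at any point during the current frame.
        void queueSpawn(const SpawnRequest& req) { pending_.push_back(req); }

        // Called in this system's slot on the next frame; requests are applied
        // before any downstream system runs, so no live state is touched late.
        void update() {
            for (const SpawnRequest& req : pending_)
                spawn(req);
            pending_.clear();
        }

    private:
        void spawn(const SpawnRequest&) { /* create the entity... */ }
        std::vector<SpawnRequest> pending_;
    };

If messages can be queued from multiple threads, the pending list would additionally need double-buffering or a lock-free queue; the sketch above assumes single-threaded queuing for simplicity.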

As with any other parallelization tasks, a few fundamental rules apply:

  • Minimize the volume of data propagated throughout the system
  • Further, minimize the lifetime of any data that does need to be passed around
  • When possible, derive data rather than duplicating it; no need to store mass, volume, and density when any two will suffice
  • Locality of reference is key; custom allocation is, as always, a major win here
  • NULL checks can be eliminated by using dummy non-operative objects instead of empty pointers (a sketch follows this list)
  • Propagate RTTI information along with pointers in order to avoid duplicate virtual-table lookups
  • Vectorize component update operations via SIMD instruction sets
  • Perform jobs in batches across cores (helps with cache/false sharing issues)
  • Interleaved allocation is a powerful tool for leveraging SIMD and other parallelization techniques
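
As an example of the null-object point above (illustrative names only, not from the talk): rather than branching on a possibly-null target pointer inside a hot update loop, components with no target point at a shared do-nothing instance.

    // Null-object sketch: the "no target" case is a real object whose
    // operations simply do nothing, so the hot path never tests for NULL.
    struct Target {
        virtual ~Target() = default;
        virtual void applyDamage(float) {}       // non-operative by default
    };

    struct DamageableTarget : Target {
        float health = 100.0f;
        void applyDamage(float amount) override { health -= amount; }
    };

    // One shared dummy; components with no target point here instead of at null.
    static Target nullTarget;

    struct WeaponComponent {
        Target* target = &nullTarget;            // never null, so no NULL check
        void fire() { target->applyDamage(25.0f); }
    };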


Concluding Thoughts
The session was a great way to open the AI Summit, cramming vast amounts of valuable advice and information into the one time-slot when everyone's mind was guaranteed not to have already turned to jelly. Although many of the details were not new to those experienced with component architectures, there were plenty of nuggets to guide the decision-making process of both novice and veteran architects alike. An informal poll of the audience suggested that a substantial portion of those in attendance learned at least something valuable to take back to their own design efforts - the hallmark of a truly successful session.