The opposite of brittleness in complex systems. David Woods presents a theory of graceful extensibility. This paper is dense and will reward further study; know that it is not for the faint of heart.

Woods, David D. "The Theory of Graceful Extensibility: Basic Rules that Govern Adaptive Systems." Department of Integrated Systems Engineering, Ohio State University, September 2018.
All systems have an envelope of performance, or a range of adaptive behavior, due to **finite resources** and **continuous change**. There is a transition zone where a system shifts regimes of performance when pushed to the edge of its envelope. Brittleness and graceful extensibility describe how the system responds when events push it beyond that envelope. Graceful extensibility refers to a system's ability to adapt how it works to extend performance past the boundary into a new regime, invoking new resources, responses, relationships, and priorities.
At the heart of the theory of graceful extensibility is the concept of managing **risk of saturation** by regulating the **capacity for maneuver**.
The ten statements are organized into three groups. Within each group, the statements complement, contrast with, and compound one another, and similar relationships hold between the three groups themselves. The diagram below over-simplifies these relationships. Like the systems these statements describe, things are far more entangled and woven together than any single model can capture. All models are wrong; we hope this model of the statements and their groupings may be useful.
```dot
strict digraph {
  compound=true
  node [shape=box style=filled fillcolor=bisque]

  subgraph cluster_MRoS {
    label="Managing Risk of Saturation"
    { S2 S1 } -> S3
  }
  subgraph cluster_NoAU {
    label="Networks of Adaptive Units"
    { S5 S4 } -> S6
  }
  subgraph cluster_OC {
    label="Outmaneuvering Constraints"
    { S9 S8 S7 } -> S10
  }

  S3 -> S8 [ltail=cluster_MRoS lhead=cluster_OC minlen=2]
  S6 -> S8 [ltail=cluster_NoAU lhead=cluster_OC minlen=2]
}
```
# Subset A: Managing Risk of Saturation
S1: The adaptive capacity of any unit at any scale is finite, therefore, all units have bounds on their range of adaptive behavior, or capacity for maneuver.
S2: Events will occur outside the bounds and will challenge the adaptive capacity of any unit, therefore, surprise continues to occur and demands response, otherwise the unit is brittle and subject to collapse in performance.
S3: All units risk saturation of their adaptive capacity, therefore, units require some means to modify or extend their adaptive capacity to manage the risk of saturation when demands threaten to exhaust their base range of adaptive behavior.
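As a concrete, if drastically simplified, illustration of S1-S3, the Python sketch below models a single adaptive unit with a finite base capacity for maneuver, a stream of surprise demands, and a one-shot reserve it can invoke when the risk of saturation climbs. The names (`AdaptiveUnit`, `risk_of_saturation`, the 0.8 threshold) are our own inventions for illustration, not the paper's.

```python
import random


class AdaptiveUnit:
    """Toy model of a unit with a finite capacity for maneuver (S1)."""

    def __init__(self, base_capacity: float, reserve: float):
        self.capacity = base_capacity  # finite range of adaptive behavior
        self.reserve = reserve         # extra capacity that can be invoked once
        self.load = 0.0

    def risk_of_saturation(self) -> float:
        """Fraction of current capacity already consumed; 1.0 is saturation."""
        return self.load / self.capacity

    def handle(self, demand: float) -> str:
        self.load += demand
        # S3: when demands threaten to exhaust the base range, extend capacity.
        if self.risk_of_saturation() > 0.8 and self.reserve > 0:
            self.capacity += self.reserve
            self.reserve = 0.0
            return "extended"
        # S2: without some way to extend, the unit is brittle and collapses.
        if self.load > self.capacity:
            return "collapsed"
        return "ok"


unit = AdaptiveUnit(base_capacity=10.0, reserve=5.0)
random.seed(1)
for step in range(8):
    demand = random.uniform(0.5, 3.0)  # surprises keep arriving (S2)
    print(step, round(demand, 2), unit.handle(demand), round(unit.risk_of_saturation(), 2))
```

Even the extended unit is finite: run it long enough and it saturates again, which is where the next subset picks up.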
# Subset B: Networks of Adaptive Units
S4: No single unit, regardless of level or scope, can have sufficient range of adaptive behavior to manage the risk of saturation alone, therefore, alignment and coordination are needed across multiple interdependent units in a network.
S5: Neighboring units in the network can monitor and influence—constrict or extend—the capacity of other units to manage their risk of saturation, therefore, the effective range of any set of units depends on how neighbors influence others as the risk of saturation increases.
S6: As other interdependent units pursue their goals, they modify the pressures experienced by a unit of adaptive behavior, which changes how that unit defines and searches for good operating points in a multi-dimensional trade space.
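To make S4-S6 concrete, here is an equally rough sketch of a network of units in which neighbors with slack lend capacity to a unit nearing saturation. The `rebalance` rule and its threshold are illustrative assumptions, not the paper's formulation; the point is that extending one unit's envelope changes the pressures on its neighbors.

```python
from dataclasses import dataclass


@dataclass
class Unit:
    name: str
    capacity: float  # current capacity for maneuver
    load: float      # demand currently being absorbed

    def risk(self) -> float:
        return self.load / self.capacity


def rebalance(units: list[Unit], threshold: float = 0.8, step: float = 1.0) -> None:
    """Neighbors with slack extend the capacity of units near saturation (S5).

    Lending capacity shrinks the lender's own room to maneuver, which changes
    the pressures it experiences and the operating points it can reach (S6).
    """
    for stressed in sorted(units, key=lambda u: u.risk(), reverse=True):
        if stressed.risk() < threshold:
            break  # no remaining unit is close to saturation
        for neighbor in units:
            if neighbor is stressed:
                continue
            if neighbor.capacity - neighbor.load > step:
                neighbor.capacity -= step  # neighbor gives up maneuvering room
                stressed.capacity += step  # stressed unit's envelope is extended
                break


units = [Unit("A", 10.0, 9.5), Unit("B", 10.0, 3.0), Unit("C", 10.0, 4.0)]
rebalance(units)
for u in units:
    print(u.name, u.capacity, round(u.risk(), 2))
```

No single unit's range is sufficient on its own (S4); the extension A receives exists only because B accepted a smaller envelope.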
# Subset C: Outmaneuvering Constraints
S7: Performance of any unit as it approaches saturation is different from the performance of that unit when it operates far from saturation, therefore, there are two fundamental forms of adaptive capacity for units to be viable—base and extended, both necessary but inter-constrained.
S8: All adaptive units are local—constrained based on their position relative to the world and relative to other units in the network, therefore, there is no best or omniscient location in the network.
S9: There are bounds on the perspective of any unit—the view from any point of observation at any point in time simultaneously reveals and obscures properties of the environment—but this limit is overcome by shifting and contrasting over multiple perspectives.
S10: There are limits on how well a unit’s model of its own and others’ adaptive capacity can match actual capability, therefore, mis-calibration is the norm and ongoing efforts are required to improve the match and reduce mis-calibration (adaptive units, at least those with human participation, are reflective, but mis-calibrated).
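S10 also lends itself to a tiny illustration. The calibration loop below is a hypothetical sketch, not anything the paper prescribes: a unit's model of its own capacity starts out wrong, and only repeated contact with events that reveal actual capability narrows the gap.

```python
def recalibrate(estimated: float, observed: float, rate: float = 0.3) -> float:
    """Nudge the unit's self-model toward what events actually revealed (S10)."""
    return estimated + rate * (observed - estimated)


# The unit believes it can absorb 12 units of demand; its real capability is 8.
estimated_capacity = 12.0
actual_capacity = 8.0

for event in range(5):
    observed = actual_capacity  # each surprise exposes the actual capability
    estimated_capacity = recalibrate(estimated_capacity, observed)
    print(f"event {event}: estimate {estimated_capacity:.2f} vs actual {actual_capacity}")
```

Mis-calibration is the starting condition here, not an anomaly, and it shrinks only because the loop keeps running.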
This paper expands on Human Performance in Systems.