By Robbin Laird
For decades, national security establishments have organized around crisis management: the structured response to disruptions within fundamentally stable systems.
The Cuban Missile Crisis, though terrifying, operated within understood parameters: known actors, measurable capabilities, calculable escalation ladders.
Even the most dangerous moments followed a logic that skilled diplomats and military planners could navigate.
The system bent under stress but retained its essential shape.
That era is ending.
My work over the past several years has documented a profound shift from crisis management to what I call “chaos management”: operating in environments where the fundamental parameters are themselves in flux, where traditional indicators fail, and where the velocity of change outpaces institutional adaptation.
The OpenAI paper on AI and international security, published in February 2026, provides the technical substrate for understanding why this shift is not merely evolutionary but represents a phase transition in the character of strategic competition.
Crisis management presumes several conditions that shaped Cold War-era thinking and persist in contemporary doctrine:
Stable baselines: The “normal” state of the system is known and relatively predictable. Crises are departures from this baseline, dangerous but temporary.
Bounded uncertainty: While specific events may surprise, the range of possibilities is constrained. Nuclear yields, missile ranges, submarine patrol areas—these could be estimated with useful precision.
Observable indicators: Intelligence communities developed sophisticated methods for detecting threats. Satellite imagery tracked missile deployments. Signals intelligence monitored communications. Human sources provided insight into intentions.
Measured timescales: Even rapid developments (a mobilization, a blockade, a weapons test) unfolded over days or weeks. There was time for deliberation, consultation, and messaging through back channels.
Human-centric dynamics: The key variables were human decisions, organizational processes, and political calculations. These could be slow, irrational, or opaque, but they operated at human speed with human constraints.
The shift to chaos management reflects the erosion of each of these conditions.
This isn’t about increased complexity alone.
Complex adaptive systems have always characterized international relations.
It’s about the velocity, opacity, and fundamentally different character of change when general-purpose AI enters the strategic environment.
The OpenAI framework identifies three pathways by which AI reshapes international security.
Each pathway maps directly onto mechanisms that transform crisis management into chaos management.
- Temporal Compression: When Planning Horizons Collapse
The first mechanism is the radical compression of timescales for both threat development and operational decision-making.
Consider the submarine detection scenario from the OpenAI paper. Current defense planning assumes that advances in undersea sensing will follow historical patterns: incremental improvements requiring large-scale investments, observable research programs, and deployment timelines measured in years. This allows for structured responses: investment in countermeasures, diversification of deterrent forces, diplomatic initiatives to manage transitions.
If AI compresses a century of materials science and signal processing into a decade, this planning paradigm fails. The threat materializes faster than acquisition cycles can respond.
But more fundamentally, the predictability of the threat environment collapses.
Defense planners cannot know whether their platforms will remain viable for five years or fifty.
Uncertainty of this magnitude doesn’t just complicate planning. It makes traditional planning frameworks incoherent.
This is chaos management: operating when you cannot reliably project even the basic parameters of the competitive environment into the near future.
My field research on Marine Corps aviation transformation, particularly at exercises like Steel Knight 2025, revealed similar dynamics at the tactical level. The integration of digital interoperability, autonomous systems, and AI-enabled decision support is already compressing decision cycles in ways that stress existing command structures. Marines speak of the challenge of “going quiet to think” when adversaries can exploit any pause. AI doesn’t merely speed up familiar processes. It changes what kinds of operations are feasible and forces adaptation to a tempo that exceeds comfortable human cognitive bandwidth.
The OpenAI paper extends this to strategic competition.
When AI enables “planning depth” that crosses critical thresholds (the ability to see consequences beyond an adversary’s horizon), the nature of strategic interaction changes.
One side can set traps the other cannot avoid. Deception becomes asymmetric.
The slower side isn’t just disadvantaged; it’s operating in a fundamentally different game.
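To make the planning-depth asymmetry concrete, consider a toy sketch (my illustration, not a model from the OpenAI paper): two depth-limited minimax players compete in a simple take-away game. The game, the depths, and the scoring are illustrative assumptions, but they show the mechanism: a player whose search reaches the end of the game sees traps that a shallower player, scoring unresolved positions as neutral, cannot.

```python
# Toy illustration of asymmetric planning depth (not from the OpenAI paper).
# Two depth-limited minimax players play "21 flags": each turn a player
# removes 1-3 flags, and whoever takes the last flag wins.
import random
from functools import lru_cache

@lru_cache(maxsize=None)
def minimax(flags: int, depth: int, maximizing: bool) -> int:
    """Value in {-1, 0, +1} from the deep (maximizing) player's view."""
    if flags == 0:
        return -1 if maximizing else 1  # Previous mover took the last flag.
    if depth == 0:
        return 0  # Planning horizon reached: the position looks neutral.
    values = [minimax(flags - take, depth - 1, not maximizing)
              for take in (1, 2, 3) if take <= flags]
    return max(values) if maximizing else min(values)

def best_move(flags: int, depth: int, maximizing: bool) -> int:
    """Pick the move with the best lookahead value; break ties randomly."""
    scored = [(minimax(flags - take, depth - 1, not maximizing), take)
              for take in (1, 2, 3) if take <= flags]
    best = max(v for v, _ in scored) if maximizing else min(v for v, _ in scored)
    return random.choice([t for v, t in scored if v == best])

def play(deep_depth: int, shallow_depth: int, flags: int = 21) -> str:
    """The deep player moves first; returns the winner's label."""
    turn = "deep"
    while True:
        depth = deep_depth if turn == "deep" else shallow_depth
        flags -= best_move(flags, depth, maximizing=(turn == "deep"))
        if flags == 0:
            return turn
        turn = "shallow" if turn == "deep" else "deep"

wins = sum(play(deep_depth=15, shallow_depth=7) == "deep" for _ in range(100))
print(f"Depth-15 planner beats depth-7 planner in {wins}/100 games")
```

In this sketch the deeper planner wins every game: the positions the shallow player evaluates as neutral are, beyond its horizon, already inside losing lines.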
- Structural Opacity: The Failure of Traditional Intelligence
The second mechanism is the breakdown of observable indicators that have historically provided strategic warning and enabled crisis management.
The erosion of secrecy discussed in the OpenAI paper represents more than an intelligence problem. It’s a challenge to the entire architecture of strategic stability.
Arms control regimes from SALT to New START depended on transparency and verification. Confidence-building measures worked because capabilities could be observed, counted, and limited through agreement. Even adversaries could develop shared understandings of the strategic environment.
AI threatens this in two directions simultaneously. First, it may enable inference of protected information from ostensibly unclassified data. If AI can reconstruct classified deliberations from patterns in public statements and observable actions, or discover “technological secrets” through autonomous research in datacenters, then classification systems become porous. The information landscape becomes fundamentally less controllable.
Second, and perhaps more destabilizing, AI-driven breakthroughs may occur with minimal observable signature. The OpenAI paper emphasizes this: major advances in cryptanalysis, materials science, or algorithmic efficiency could happen entirely within secure computing facilities. There’s no missile test to satellite-image, no procurement program to track through supply chains, no observable deployment that provides warning.
This is the essence of chaos management: functioning when your primary mechanisms for understanding the strategic environment have become unreliable.
Traditional crisis management assumes you can see threats developing and calculate responses.
In a chaos environment, the first indication of a breakthrough may be its operational deployment, or worse, its exploitation against you.
My work on European defense transformation and NATO adaptation has revealed similar patterns. The hybrid warfare environment, combining conventional forces, cyber operations, information warfare, and economic coercion, already challenges traditional indicators and warnings. AI acceleration amplifies this by orders of magnitude. When advances can be both rapid and opaque, the distinction between peacetime competition and wartime preparation blurs beyond recognition.
- Threshold Effects: Discontinuous Strategic Transitions
The third mechanism involves discontinuous changes in capability that invalidate existing strategic calculations.
Crisis management frameworks assume marginal changes. One side develops a better tank, a faster aircraft, or a more accurate missile. The other side responds with countermeasures or symmetric capabilities. The competition is continuous; advantages can be measured and countered incrementally.
The OpenAI paper highlights the possibility of “threshold effects” where AI capabilities improve abruptly rather than gradually. This isn’t about linear scaling. It’s about phase transitions. A model that can plan five moves ahead operates in the same conceptual space as one that plans seven moves ahead. But a model that can reliably plan fifteen moves ahead when adversaries can only plan seven creates qualitatively different strategic possibilities.
The paper frames this through “spiky AI”: systems with extraordinary capabilities in narrow domains. We’re already seeing this in cyber operations. AI models demonstrate capability jumps in code analysis, vulnerability discovery, and exploit development that don’t follow smooth improvement curves. Anthropic recently disclosed disrupting the first AI-orchestrated cyber espionage campaign. The threshold from “AI-assisted” to “AI-orchestrated” operations isn’t gradual.
Applied to broader strategic competition, threshold effects create the conditions for what the OpenAI paper calls “false stability.” If capabilities improve gradually, nations can adapt incrementally. But if capabilities improve in jumps, if there are discrete thresholds where compute resources or algorithmic improvements suddenly enable qualitatively different operations, then the period of apparent stability is illusory. The system looks stable until it suddenly isn’t.
This is chaos management: operating in a strategic environment characterized by potential discontinuities you cannot reliably predict or prepare for through traditional planning methods.
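A small numerical sketch can illustrate this “false stability” dynamic (again my illustration; the threshold, slopes, and units are invented for the example, not drawn from the paper). An observer fits a smooth trend to the capability it can measure, while the underlying capability jumps discontinuously once a hidden compute threshold is crossed.

```python
# Toy sketch of "false stability" (illustrative numbers only): a linear
# forecast tracks a capability trend well until a hidden threshold, where
# the true capability jumps and the forecast fails in both directions.

def true_capability(compute: float, threshold: float = 8.0) -> float:
    smooth = 0.5 * compute                         # Observable, gradual gains.
    jump = 10.0 if compute >= threshold else 0.0   # Discrete phase transition.
    return smooth + jump

def linear_forecast(history: list[tuple[float, float]], compute: float) -> float:
    """Extrapolate a line through the last two observations."""
    (x0, y0), (x1, y1) = history[-2], history[-1]
    slope = (y1 - y0) / (x1 - x0)
    return y1 + slope * (compute - x1)

history: list[tuple[float, float]] = []
for compute in range(1, 11):
    actual = true_capability(compute)
    if len(history) >= 2:
        forecast = linear_forecast(history, compute)
        surprise = actual - forecast
        flag = "  <-- forecast breaks down" if abs(surprise) > 1 else ""
        print(f"compute={compute:2d}  forecast={forecast:5.1f}  "
              f"actual={actual:5.1f}  surprise={surprise:+5.1f}{flag}")
    history.append((float(compute), actual))
```

The flagged rows are the point: the forecast matches reality exactly up to the threshold, then misses badly at the jump and again just after it, when the fitted trend overshoots.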
One of the most troubling aspects of the crisis-to-chaos transition involves the potential for democratic disadvantage. The OpenAI paper notes concerns raised by national security leaders about authoritarian governments exploiting AI “without democratic accountability.”
In crisis management frameworks, democratic deliberation is valuable. Time for debate, legislative oversight, and public scrutiny improves decision quality and builds legitimacy. The Cuban Missile Crisis, for all its dangers, allowed for careful deliberation within ExComm and consideration of alternatives.
In chaos management environments, these strengths may become vulnerabilities. If AI-enabled decision compression rewards speed over deliberation, if opacity favors systems that can integrate AI into surveillance and control without legal constraints, if institutional adaptation requires top-down coordination rather than democratic consensus-building, then authoritarian systems may possess structural advantages.
My research on European defense and NATO burden-sharing has documented the challenge of coordinating 32 democratic nations with different threat perceptions, budget cycles, and domestic political constraints. Adding AI acceleration to this environment amplifies the coordination problem. China’s civil-military fusion strategy and Russia’s increasingly centralized security apparatus may prove better suited to rapid AI integration, not because authoritarian systems make better decisions, but because they can make faster decisions and implement them without the friction of democratic process.
Yet this may prove shortsighted.
Crisis management succeeded in part because democratic systems, despite their slowness, produced more robust and adaptive responses.
The question for chaos management is whether the same holds when tempo increases by an order of magnitude, or whether new institutional forms are needed that preserve democratic accountability while enabling speed.
The transition from crisis to chaos management demands fundamental rethinking across several dimensions:
Resilience over optimization: Traditional defense planning optimizes for known threats, the “threat-based” approach that dominated post-Cold War acquisition. Chaos management requires resilience to unknown and rapidly evolving threats. This means redundancy, diversity, and adaptability rather than efficient specialization. My work on Marine Corps Force Design 2030 suggests this shift is underway tactically, but strategic-level adaptation lags.
Continuous adaptation over periodic planning: Crisis management relies on planning cycles (PPBE, QDRs, five-year defense plans). These presume a future you can plan toward. Chaos management requires treating strategy as continuous rather than episodic. Organizations must adapt in real time to an environment that won’t stabilize long enough for traditional planning cycles to complete.
Distributed authority over centralized control: When decision cycles compress below the time required for centralized approval, authority must be distributed. This is already evident in concepts like Distributed Maritime Operations and the kill web frameworks emerging from my research on autonomous systems. But extending this to strategic decision-making raises profound questions about risk, accountability, and the role of human judgment.
Transparency as a strategic asset: If secrecy becomes less maintainable due to AI inference capabilities, the value of transparency as a stabilizing mechanism increases. This seems counterintuitive but reflects a deeper shift—in chaos environments, coordination with potential adversaries to avoid inadvertent escalation may matter more than temporary advantage from concealment.
International coordination mechanisms: The OpenAI paper emphasizes the need for “coordinated, large-scale effort” comparable to the arms control architecture built during the Cold War. But chaos management may require more dynamic mechanisms: not treaties negotiated over decades but adaptive regimes that can evolve as AI capabilities shift.
My work on Coast Guard transformation illustrates these challenges at a service level. The Coast Guard operates in an environment that already exhibits chaos characteristics: diverse mission sets, resource constraints, rapidly evolving threats from asymmetric actors, and requirements for continuous presence rather than episodic crisis response.
AI offers the Coast Guard enormous potential: enhanced maritime domain awareness, improved search and rescue coordination, and more effective drug interdiction.
But integration faces all the challenges of chaos management: how to build trust in AI-enabled systems, how to maintain human oversight when operating tempo increases, how to adapt acquisition and training faster than the threat environment evolves.
The Coast Guard’s strategic role, spanning major power competition, Arctic operations, infrastructure protection, and partnership-building, positions it at the intersection of traditional law enforcement and emerging strategic competition. The service’s experience may offer lessons for managing the crisis-to-chaos transition at higher levels.
The OpenAI paper concludes that much of the argument about AI timelines has collapsed to the difference between two years and ten years, both short compared to institutional change timescales. This is precisely the challenge chaos management addresses: how to function when you know major change is coming but cannot predict its precise form or timing.
The shift from crisis to chaos management isn’t about abandoning structure for improvisation.
It’s about building different kinds of structures—resilient rather than optimized, adaptive rather than static, distributed rather than centralized.
It requires accepting that we cannot return to the comfortable predictability of Cold War-era crisis management, where baselines were stable and futures were calculable.
My research across Marine Corps transformation, European defense adaptation, and Coast Guard modernization documents this transition at multiple levels. The OpenAI framework provides the technical explanation for why this transition is accelerating and why traditional approaches are inadequate.
The question isn’t whether we prefer crisis management or chaos management. The choice has already been made by technological and geopolitical forces beyond any single nation’s control.
The question is whether we can build the institutional capacity, strategic frameworks, and international mechanisms to manage chaos before events force reactive decisions under the worst possible conditions.
In this sense, the shift from crisis to chaos management represents the central strategic challenge of our era.
AI doesn’t merely add another variable to existing frameworks. It transforms the fundamental character of strategic competition.
Those who adapt their thinking accordingly will shape the future.
Those who cling to crisis management paradigms will find themselves overtaken by a reality they no longer understand.
Note: I am publishing my new book on chaos management in May, along with an omnibus edition that includes the two books that precede it.


The OpenAI paper discussed throughout: “AI and International Security: Beyond Weapons to the Foundations of Power” (OpenAI, February 2026).
