By Robbin Laird
In February 2026, a team from OpenAI published what may become a landmark document in the emerging field of AI geopolitics.
“AI and International Security: Pathways of Impact and Key Uncertainties” represents something unusual: a major AI laboratory attempting to map how its own technology could reshape the global balance of power.
Drawing on interviews with former Secretaries of Defense, National Security Advisors, and senior officials from national laboratories, the paper makes a striking argument. AI’s most significant effects on international security won’t come from autonomous weapons or cyber operations, but from how it transforms the fundamental structures that underpin national power.
The authors, led by Jason Pruet and including OpenAI’s Chief Economist, frame their analysis through historical precedent. They point not to specific weapons systems but to general-purpose technologies that restructured the foundations of military power. The marine chronometer enabled precise longitude determination, making naval forces effective in waters where they’d previously been helpless. The electric telegraph collapsed command-and-control timelines in the mid-19th century. Even the humble stirrup is credited with reshaping medieval warfare and social institutions by changing how mounted combat worked.
This framing is deliberate and consequential.
Much of the public discourse on AI and security focuses on near-term applications: swarms of autonomous drones, AI-enhanced cyber attacks, or algorithmically targeted disinformation.
The OpenAI paper doesn’t dismiss these concerns but argues they miss the larger story. As Paul Kennedy showed in The Rise and Fall of the Great Powers, long-run shifts in economic strength and general-purpose technological advantages have historically mattered more than specific weapons for determining who dominates the international system.
The paper’s central provocation is this: private sector spending on AI development may now exceed the military R&D budgets of nearly every country in the world. This means governments will gain access to AI-enabled capabilities, particularly scientific acceleration, generated by an effort comparable in scale to their own defense research enterprises.
But they won’t control the timeline, the priorities, or the fundamental characteristics of these systems.
The analysis organizes AI’s impacts across three dimensions, each associated with critical technical uncertainties that urgently need resolution.
-
Deterrence and Force Projection: The Compression of Military Science
Consider this scenario from the paper: AI increases the rate of progress in undersea sensing technology by a factor of ten. By 2040, we have capabilities that wouldn’t have been expected until the end of the century. Before the last of the twelve planned Columbia-class nuclear submarines makes its first deployment, the state of the science will already be fifty years beyond what was expected at the boat’s planned end of life.
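To make the arithmetic of that compression concrete, here is a minimal sketch. The ten-fold factor comes from the paper’s scenario; the 2026 baseline year and the assumption of constant acceleration are mine, for illustration only.

```python
# Toy illustration of R&D timeline compression. Assumption: progress runs
# at a constant multiple of the historical rate from a chosen baseline year.

def equivalent_science_year(calendar_year: int,
                            baseline_year: int = 2026,
                            acceleration: float = 10.0) -> float:
    """Year whose historical state of science would be reached by
    `calendar_year` if progress ran `acceleration` times faster than
    the historical rate from `baseline_year` onward."""
    return baseline_year + acceleration * (calendar_year - baseline_year)

# Under a ten-fold acceleration starting in 2026, by 2040 the state of the
# science corresponds to well past the end of the century at historical rates.
print(equivalent_science_year(2040))  # 2166.0
```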
This matters because submarines carrying ballistic missiles provide a crucial leg of nuclear deterrence for several NATO countries and Russia. Their value rests on being difficult to detect. If AI dramatically accelerates progress in quantum magnetometry, signal processing, or materials science relevant to detection, strategic stability could erode faster than acquisition and planning cycles can adapt.
The paper cites ongoing research using generative AI for discovering high-temperature superconductors, materials that could make distributed submarine detection platforms practical. While no breakthrough has been confirmed, the possibility illustrates a broader point: we have no organizational structures, culture, or experience preparing us for a century of military science occurring every decade.
The key uncertainty here isn’t whether AI can improve specific systems. It clearly can. The question is: Will AI markedly accelerate fundamental scientific discovery in areas relevant to military power?
The span of expert opinion ranges from skepticism (AI may even slow scientific progress by encouraging reliance on flawed but predictively accurate theories) to Microsoft’s CEO declaring a goal to “compress the next 250 years of chemistry and materials science progress into the next 25.”
Current security plans were developed assuming historical rates of scientific progress.
Radical uncertainty about whether that assumption still holds makes it impossible to assess whether our defense architecture remains viable.
-
Resources for National Power: When Computing Becomes Strategic
The paper’s second pathway examines how AI reshapes dependencies on essential resources.
Could computing capacity become as strategically critical as rare earth elements, uranium deposits, or oil reserves once were?
This question has immediate implications for the balance of power.
If meaningful AI capabilities require vastly more computing than currently deployed, only the United States and perhaps China could develop truly transformative systems.
Other nations would either face technological stagnation or become increasingly dependent on the few countries controlling frontier AI, unless international agreements emerged.
Conversely, if the barriers are lower, smaller states or even non-state actors might access capabilities that currently require superpower-scale resources.
The distribution of power would flatten in unpredictable ways.
The paper frames this through a game-theoretic lens: asymmetries in AI computing capacity create incentives for preemption. Imagine two nations, one leading in AI development. If AI provides gradual benefits (“Case 1” in the paper), the lagging nation has little reason to attack: it loses more from conflict than from falling somewhat behind.
But if AI provides abrupt, decisive advantages (“Case 2”), the weaker side faces a Cuban Missile Crisis-style dilemma: strike before the window closes, or accept permanent inferiority.
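The logic reduces to a comparison of two payoffs for the lagging state, as in the stylized sketch below. The payoff numbers are invented for illustration and appear nowhere in the paper.

```python
# Stylized sketch of the preemption logic described above. All payoff
# numbers are invented; "Case 1" and "Case 2" are the paper's labels for
# gradual versus abrupt AI advantages.

def lagging_state_prefers_preemption(payoff_wait: float,
                                      payoff_strike: float) -> bool:
    """The lagging state prefers to strike when waiting is even worse."""
    return payoff_strike > payoff_wait

# Case 1: gradual benefits. Falling somewhat behind is costly but tolerable,
# while conflict is very costly.
case1 = {"wait": -2.0, "strike": -10.0}

# Case 2: abrupt, decisive advantage. Waiting means permanent inferiority,
# so even a costly strike can look preferable.
case2 = {"wait": -20.0, "strike": -10.0}

for name, payoffs in [("Case 1 (gradual)", case1), ("Case 2 (abrupt)", case2)]:
    strike = lagging_state_prefers_preemption(payoffs["wait"], payoffs["strike"])
    print(f"{name}: lagging state prefers preemption -> {strike}")

# Output:
# Case 1 (gradual): lagging state prefers preemption -> False
# Case 2 (abrupt): lagging state prefers preemption -> True
```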
According to economic modeling cited in the paper, if AGI requires only 2,500 times the computing used for today’s largest training runs, it could double global productivity in three years. That’s not a distant possibility. It’s potentially within current investment trajectories.
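The implied growth rate is straightforward to back out. In the minimal sketch below, the 2,500-times figure and the three-year doubling come from the modeling the paper cites; the four-fold scale-up cadence is my own assumption, included only to give a sense of scale.

```python
import math

# Figures cited in the paper: AGI at roughly 2,500x today's largest training
# runs could double global productivity within three years.
compute_multiple = 2_500
doubling_time_years = 3

# Compound annual productivity growth implied by a doubling in three years.
annual_growth = 2 ** (1 / doubling_time_years) - 1
print(f"Implied productivity growth: {annual_growth:.1%} per year")  # ~26.0%

# Rough sense of scale: number of four-fold increases in training compute
# (an assumed cadence, not from the paper) needed to reach 2,500x.
scaleups_needed = math.log(compute_multiple, 4)
print(f"About {scaleups_needed:.1f} four-fold scale-ups")  # ~5.6
```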
Yet we have no established metrics for tracking “National Inference Compute” or “Compute Mobilization Latency” comparable to how we monitor nuclear stockpiles or military readiness.
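What tracking such quantities might even look like is an open question. The sketch below is purely hypothetical: the two quoted terms come from the paper, but every field name and value is an invention for illustration.

```python
from dataclasses import dataclass

@dataclass
class ComputePostureEstimate:
    """Hypothetical national-level indicators, loosely analogous to how
    nuclear stockpiles or readiness are tracked. Fields are illustrative."""
    country: str
    national_inference_compute_flops: float   # sustained inference capacity
    frontier_training_compute_flops: float    # largest feasible training run
    compute_mobilization_latency_days: float  # time to redirect capacity
    as_of: str                                # date of the estimate

example = ComputePostureEstimate(
    country="Exampleland",
    national_inference_compute_flops=1e21,
    frontier_training_compute_flops=1e26,
    compute_mobilization_latency_days=90.0,
    as_of="2026-02-01",
)
print(example)
```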
-
Understanding the Environment: The Erosion of Secrecy
The third pathway may be the most unsettling: AI’s potential to fundamentally undermine secrecy, the bedrock of military and diplomatic planning.
The paper distinguishes between “social secrets” (deliberations, decisions, plans) and “technological secrets” (physical principles, algorithms, designs that can be discovered independently).
AI threatens both.
Statistical methods already allow remarkably strong inferences about human behavior from seemingly innocuous data. In an extreme limit, AI might reconstruct what was said in closed-door cabinet meetings without any spies or leaks, simply by analyzing patterns in subsequent actions, public statements, and observable outcomes.
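As a toy illustration of the kind of statistical inference involved, the sketch below runs a simple Bayesian update over invented probabilities; nothing here comes from the paper beyond the general idea of inferring a hidden decision from observable follow-on signals.

```python
# Toy Bayesian update: infer a hidden decision from public signals.
# All hypotheses, signals, and probabilities are invented for illustration.

prior = {"escalate": 0.5, "de-escalate": 0.5}

# Probability of observing each public signal under each hypothesis.
likelihood = {
    "unusual logistics activity": {"escalate": 0.7, "de-escalate": 0.2},
    "softened public statement":  {"escalate": 0.3, "de-escalate": 0.8},
    "reserve call-up rumors":     {"escalate": 0.6, "de-escalate": 0.1},
}

observed = ["unusual logistics activity", "reserve call-up rumors"]

posterior = dict(prior)
for signal in observed:
    for hypothesis in posterior:
        posterior[hypothesis] *= likelihood[signal][hypothesis]

total = sum(posterior.values())
posterior = {h: p / total for h, p in posterior.items()}

print(posterior)  # roughly {'escalate': 0.95, 'de-escalate': 0.05}
```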
Technological secrets face different threats. The paper notes that Britain’s GCHQ independently discovered public key cryptography years before it appeared in open literature but kept it classified. Things like cryptographic algorithms or physics applications in weapons can be discovered by anyone smart enough—or any AI capable enough. If AI systems can make such discoveries autonomously within datacenters, protected information could be exposed without any theft of blueprints or human espionage.
The historical parallel is sobering.
Shor’s algorithm for breaking widely used encryption with a quantum computer was published in 1994. It took roughly twenty years, until the mid-2010s, for NIST to begin developing post-quantum cryptography standards. Even then, “harvest now, decrypt later” attacks mean that encrypted data captured today may still be vulnerable to future quantum computers.
But organizational change takes time.
Even with trusted quantum-resistant algorithms available, it would require years for institutions to fully transition.
With AI advancing at current pace, we may not have twenty years between threat identification and deployment of countermeasures.
The paper emphasizes an urgent need for large-scale studies quantifying frontier AI models’ impact on both scientific discovery and the ability to infer protected information.
What emerges from these three pathways is a portrait of profound technical uncertainty. The paper documents interviews with senior national security leaders struggling with this reality. As Lieutenant General Jack Shanahan (ret.) put it: “The problem is massive uncertainty. Decision-makers are torn between claims that ‘this will end the human race’ and ‘this can’t add 4 digit numbers’.”
This uncertainty creates what the authors call “false stability”: a period in which inaction seems prudent because the future is unclear.
If we don’t know whether AI will accelerate science, provide decisive advantages in strategic planning, or undermine secrecy, costly adaptation measures seem premature. But if these capabilities arrive suddenly, the result is reactive crisis decision-making under extreme time pressure, exactly the conditions most likely to produce catastrophic miscalculation.
The paper draws an explicit parallel to the early Cold War, when a spectrum of powerful new technologies (nuclear weapons, ballistic missiles, satellite reconnaissance) required decades of work, new disciplines, and extensive collaboration between political leadership and technical experts to build stable deterrence frameworks.
Even with that effort, there were harrowing close calls and considerable luck. Finding ways to navigate the AI transition, the authors argue, will require comparable large-scale coordination, but the current institutional landscape is radically unprepared.
Former Secretary of the Navy Richard Danzig captured the adoption challenge: “The impact of AI on the military is not predominantly dependent on the technology, but on the assimilation process. If I put a bounteous feast in front of you but your jaw is wired shut, you can’t eat.”
The paper’s core contribution is identifying specific technical uncertainties whose resolution would most improve our ability to navigate the AI transition.
These aren’t predictions but rather a framework for interpreting new information as it emerges:
- Scientific acceleration: Systematic measurements of how frontier reasoning models actually affect R&D productivity across domains relevant to military power.
- Compute requirements: Better understanding of the relationship between computing resources and meaningful capability thresholds.
- Diffusion dynamics: How quickly advantages in AI erode through algorithmic improvements, espionage, or independent development.
- Inference from data: Whether AI can reconstruct secrets from unclassified information or publicly observable patterns.
- Planning depth: At what scale AI-supported strategic planning crosses thresholds that change operational feasibility.
The authors emphasize that AI laboratories bear responsibility for providing this technical foundation, not because they’re responsible for international security, but because political and military leaders cannot make informed decisions without understanding what AI can and cannot do.
Several themes emerge that should concern anyone thinking about major power competition in the AI era:
- Compression of timelines: If AI accelerates R&D by even a factor of five, current acquisition programs will be obsolete before completion. The Columbia-class submarines mentioned earlier are planned to operate until the 2080s. If the science of detection advances fifty years faster than expected, deterrence assumptions collapse.
- Asymmetric transparency: AI may create a world where authoritarian states gain advantages in surveillance and control while democratic institutions struggle with legal constraints, civil liberties concerns, and the need for public debate. Yet those constraints may ultimately produce more robust systems. The paper doesn’t resolve this tension but flags it as critical.
- Strategic surprise without warning: If major breakthroughs happen inside datacenters with minimal observable signatures, traditional intelligence indicators fail. There may be no “Sputnik moment,” no visible launch that alerts competitors. The first sign could be deployment.
- The alliance problem: If only a few nations can develop frontier AI, alliance structures may strain. Why maintain expensive defense commitments to countries that lack the technological base to contribute meaningfully? Conversely, AI-enabled prosperity might strengthen alliances by increasing the stakes in preserving stability.
The OpenAI paper arrives at a curious moment. By the authors’ own account, we’re past the point where we can dismiss AI’s impact on international security as speculative. Current systems already demonstrate capabilities relevant to the pathways they describe.
Yet we lack coherent programs, organizational structures, or even shared vocabulary for addressing these challenges at scale.
The span of credible expert estimates, from “normal technology” to “superintelligence creates decisive advantage,” is so wide as to preclude coherent planning.
This is the paper’s central warning: we cannot afford to wait for certainty.
The sooner critical technical uncertainties are resolved, the more time exists for measures to preserve stability.
Whether through arms control frameworks adapted for AI, transparency regimes for compute capacity, international agreements on limits to certain capabilities, or entirely new institutional arrangements, managing the transition will require what the paper calls “a coordinated, large-scale effort.”
That effort doesn’t yet exist.
What does exist is a growing recognition that AI represents not just another technology to integrate into existing security frameworks, but a force that may require rethinking those frameworks entirely.
The marine chronometer didn’t just improve navigation. It changed which nations could project power and where.
The question facing us now is whether AI will prove similarly transformative, and whether we can build the understanding needed to navigate through a world of chaos and survive and thrive in the anarchy of the moment.
I will focus on this aspect of the challenge in my follow-on article to this one to be published later this week.
