Why Technology Will Define the Future of Geopolitics
When Russian forces marched on
Kyiv in February 2022, few thought Ukraine could survive. Russia had more than
twice as many soldiers as Ukraine. Its military budget was more than ten times
as large. The U.S. intelligence community estimated that Kyiv would fall within
one to two weeks at most.
Outgunned and outmanned, Ukraine
turned to one area in which it held an advantage over the enemy: technology.
Shortly after the invasion, the Ukrainian government
uploaded all its critical data to the cloud, so that it could safeguard
information and keep functioning even if Russian missiles turned its
ministerial offices into rubble. The country’s Ministry of Digital
Transformation, which Ukrainian President Volodymyr Zelensky had
established just two years earlier, repurposed its e-government mobile app,
Diia, for open-source intelligence collection, so that citizens could upload
photos and videos of enemy military units. With their communications
infrastructure in jeopardy, the Ukrainians turned to Starlink satellites and
ground stations provided by SpaceX to stay connected. When
Russia sent Iranian-made drones across the border, Ukraine acquired its own
drones specially designed to intercept their attacks — while its military
learned how to use unfamiliar weapons supplied by Western allies. In the
cat-and-mouse game of innovation, Ukraine simply proved nimbler. And so what
Russia had imagined would be a quick and easy invasion has turned out to be
anything but.
Ukraine’s success can be credited
in part to the resolve of the Ukrainian people, the weakness of the Russian
military, and the strength of Western support. But it also stems from the defining new force of international politics: innovation power. Innovation power is the
ability to invent, adopt, and adapt new technologies. It contributes to both
hard and soft power. High-tech weapons systems increase military might, new
platforms and the standards that govern them provide economic leverage, and
cutting-edge research and technologies enhance global appeal. There is a long
tradition of states harnessing innovation to project power abroad, but what has
changed is the self-perpetuating nature of scientific
advances. Developments in artificial intelligence in
particular not only unlock new areas of scientific discovery; they also speed
up that very process. Artificial intelligence supercharges the ability of
scientists and engineers to discover ever more powerful technologies, fostering
advances in artificial intelligence itself as well as in other fields — and
reshaping the world in the process.
The ability to innovate faster and better — the foundation on which military, economic, and cultural power now rests — will determine the outcome of the great-power competition between the United States and China. For now, the United States remains in the lead. But China is catching up in many areas and has already surged ahead in others. To emerge victorious from this century-defining contest, the United States cannot settle for business as usual. Instead, the U.S. government will have
to overcome its stultified bureaucratic impulses, create favorable conditions
for innovation, and invest in the tools and talent needed to kick-start the
virtuous cycle of technological advancement. It needs to commit itself to
promoting innovation in the service of the country and in the service of
democracy. At stake is nothing less than the future of free societies, open
markets, democratic government, and the broader world order.
KNOWLEDGE IS POWER
The nexus between technological
innovation and global domination dates back centuries, from the muskets the
conquistador Francisco Pizarro wielded to defeat the Inca Empire to the
steamboats Commodore Matthew Perry commanded to force the opening of Japan.
But the sheer speed at which innovation is happening has no precedent. Nowhere
is this change clearer than in one of the foundational technologies of our
time: artificial intelligence.
Today’s AI systems can already
provide key advantages in the military domain, where they are able to parse
millions of inputs, identify patterns, and alert commanders to enemy activity.
The Ukrainian military, for example, has used AI to efficiently scan
intelligence, surveillance, and reconnaissance data from a variety of sources.
Increasingly, however, AI systems will move beyond merely assisting human
decision-making and start making decisions themselves. John Boyd, a military
strategist and U.S. Air Force colonel, coined the term “OODA loop” — observe,
orient, decide, act — to describe the decision-making process in combat.
Crucially, AI will be able to execute each part of the OODA loop much
faster. Conflict can happen at the speed of computers, not the speed of
people. As a result, command-and-control systems that rely on human
decision-makers — or, worse, complex military hierarchies — will lose out to
faster, more efficient systems that team machines with humans.
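To see why speed matters, consider a minimal sketch, written in Python purely for illustration, of the OODA loop as a software cycle. The sensor values, function names, and decision rule are all hypothetical; the point is only that a machine can close the loop in microseconds, while a human staff relaying reports up a hierarchy needs minutes or hours.

```python
import time

def observe(sensor_feed):
    """Collect the next raw input (a radar track, an image, a signal)."""
    return sensor_feed.pop(0)

def orient(observation, picture):
    """Fuse the new observation into the running picture of the battlefield."""
    picture.append(observation)
    return picture

def decide(picture):
    """Choose an action; a real system would score many candidate actions."""
    return "engage" if picture[-1] == "hostile" else "hold"

def act(action):
    """Execute the chosen action and report the result."""
    return f"executed: {action}"

sensor_feed = ["unknown", "hostile", "friendly"]  # toy data
picture = []
start = time.perf_counter()
while sensor_feed:
    action = decide(orient(observe(sensor_feed), picture))
    act(action)
print(f"3 full OODA cycles in {time.perf_counter() - start:.6f} seconds")
```

Whichever side iterates through this cycle faster gets inside the opponent's decision loop, acting on fresh information while the adversary is still reacting to stale information.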
In previous eras, the
technologies that shaped geopolitics — from bronze to steel,
steam power to nuclear fission — were largely singular. There was a clear
threshold of technological mastery, and once a country reached it, the playing
field was leveled. Artificial intelligence, by contrast, is generative in
nature. By presenting a platform for continuous scientific and technological
innovation, it can lead to yet more innovation. That phenomenon makes the AI
age fundamentally different from the Bronze Age or the steel age. Rather than
natural resource wealth or mastery of a given technology, the source of a
country’s power now lies in its ability to continuously innovate.
This virtuous cycle will only get
faster and faster. Once quantum computing comes of age, superfast computers
will allow for the processing of ever-larger amounts of data, producing
ever-smarter AI systems. These AI systems, in turn, will be able to produce
breakthrough innovations in other emerging fields, from synthetic biology to
semiconductor manufacturing. Artificial intelligence will change the very
nature of scientific research. Instead of making progress one study at a time,
scientists will discover the answers to age-old questions by analyzing massive
data sets, freeing the world’s smartest minds to devote more time to developing
new ideas. As a foundational technology, AI will be critical in the race for
innovation power, lying behind countless future developments in drug discovery,
gene therapy, materials science, and clean energy — and in AI itself. Faster
airplanes did not help build faster airplanes, but faster computers will help
build faster computers.
Even more powerful than today’s
artificial intelligence is a more comprehensive technology — for now, given
current computing power, still hypothetical — called “artificial general
intelligence,” or AGI. Whereas traditional AI is designed to solve a discrete
problem, AGI should be able to perform any mental task a human can and more.
Imagine an AI system that could answer seemingly intractable questions, such as
the best way to teach a million children English or to treat a case of
Alzheimer’s disease. The advent of AGI remains years, perhaps even decades,
away, but whichever country develops the technology first will have a massive
advantage, since it could then use AGI to develop ever more advanced versions
of AGI, gaining an edge in all other domains of science and technology in the
process. A breakthrough in this field could usher in an era of predominance not
unlike the short period of nuclear superiority the United States enjoyed in the
late 1940s.
Whereas many of AI’s most
transformative effects are still far off, innovation in drones is
already upending the battlefield. In 2020, Azerbaijan employed Turkish- and
Israeli-made drones to gain a decisive advantage in its war against Armenia in
the disputed Nagorno-Karabakh region, racking up battlefield victories after
more than two decades of military stalemate. Similarly, Ukraine’s fleet of drones — many of which are low-cost commercial models repurposed for reconnaissance behind enemy lines — has played a critical role in its successes.
Drones offer distinct advantages
over traditional weapons: they are smaller and cheaper, offer unmatched
surveillance capabilities, and reduce soldiers’ risk exposure. Marines in urban
warfare, for example, could be accompanied by microdrones that serve as their
eyes and ears. Over time, countries will improve the hardware and software
powering drones to outinnovate their rivals. Eventually, autonomous weaponized
drones — not just unmanned aerial vehicles but also ground-based ones — will
replace soldiers and manned artillery altogether. Imagine an autonomous
submarine that could quickly move supplies into contested waters or an
autonomous truck that could find the optimal route to carry small missile
launchers across rough terrain. Swarms of drones, networked and coordinated by
AI, could overwhelm tank and infantry formations in the field. In the Black
Sea, Ukraine has used drones to attack
Russian ships and supply vessels, helping a country with a minuscule navy
constrain Russia’s mighty Black Sea Fleet. Ukraine offers a preview of future
conflicts: wars that will be waged and won by humans and machines working
together.
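The military logic of swarming can be sketched in a few lines. The following Python toy, with invented coordinates and gains, shows the core idea: each drone follows the same simple local rules (move toward a shared objective, keep clear of neighbors), and coordination emerges with no central controller for a defender to destroy. It is a schematic of the concept, not a model of any real system.

```python
import random

NUM_DRONES, STEPS = 8, 50
TARGET = (100.0, 100.0)   # hypothetical shared objective
SPACING = 5.0             # minimum desired separation between drones

# Start the swarm at random positions near the origin.
drones = [[random.uniform(0, 20), random.uniform(0, 20)] for _ in range(NUM_DRONES)]

for _ in range(STEPS):
    for i, (x, y) in enumerate(drones):
        # Rule 1: each drone steers toward the shared objective.
        dx, dy = (TARGET[0] - x) * 0.05, (TARGET[1] - y) * 0.05
        # Rule 2: each drone steers away from neighbors that are too close.
        for j, (nx, ny) in enumerate(drones):
            if i != j and abs(x - nx) + abs(y - ny) < SPACING:
                dx += (x - nx) * 0.2
                dy += (y - ny) * 0.2
        drones[i] = [x + dx, y + dy]

# The swarm converges on the target while holding spacing; losing any one
# drone leaves the behavior of the rest unchanged.
print([(round(x, 1), round(y, 1)) for x, y in drones])
```

Because the intelligence is distributed across the swarm rather than concentrated in one platform, shooting down individual drones degrades the attack only marginally, which is precisely what makes swarms so hard to defend against.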
As developments in drones make
clear, innovation power underlies military power. First and foremost,
technological dominance in crucial domains bolsters a country’s ability to wage
war and thus strengthens its deterrent capabilities. But innovation also shapes
economic power by giving states leverage over supply chains and the ability to
make the rules for others. Countries reliant on natural resources or trade,
especially those that must import rare or foundational goods, face
vulnerabilities others do not.
Consider the power China can
wield over the countries it supplies with communications hardware. It is no
surprise that countries dependent on Chinese-supplied infrastructure — such
as many countries in Africa, where components produced by
Huawei make up about 70 percent of 4G networks — have been loath to criticize
Chinese human rights violations. Taiwan’s primacy in semiconductor
manufacturing, likewise, provides a powerful deterrent against invasion, since
China has little interest in destroying its largest source of microchips.
Leverage also accrues to countries pioneering new technologies. The United
States, thanks to its role in the foundation of the Internet, has for decades
enjoyed a seat at the table defining Internet regulations. During the Arab
Spring, for example, the fact that the United States was home to
technology companies that provided the backbone of the Internet enabled those
companies to refuse Arab governments’ censorship requests.
Less obvious but also crucial,
technological innovation buoys a country’s soft power. Hollywood and tech
companies such as Netflix and YouTube have built up a trove of content for an
increasingly global consumer base — all the while helping spread American
values. Such streaming services project the American way of life into living
rooms around the world. Similarly, the prestige associated with U.S.
universities and the opportunities for wealth creation created by U.S. companies
attract strivers from across the globe. In short, a country’s ability to
project power in the international sphere — militarily, economically, and
culturally — depends on its ability to innovate faster and better than its
competitors.
RACE TO THE TOP
The main reason innovation now
lends such a massive advantage is that it begets more innovation. In part, it
does so because of the path dependency that arises from clusters of scientists
attracting, teaching, and training other great scientists at research
universities and large technology companies. But it also does so because
innovation builds on itself. Innovation relies on a loop of invention,
adoption, and adaptation — a feedback cycle that fuels yet more innovation. If
any link in the chain breaks, so, too, does a country’s ability to innovate
effectively.
A lead in invention is typically
built on years of prior research. Consider the way the United
States led the world into the 4G era of telecommunications.
The rollout of 4G networks across the country facilitated the early development
of mobile applications such as Uber that required faster cellular data
connections. With that lead, Uber was able to refine its product in the United States
so it could roll it out in developing countries. This led to many more
customers — and much more feedback to incorporate — as the company adapted its
product for new markets and new releases.
But the moat around countries
that enjoy structural advantages in technology is shrinking. Thanks in part to
more accessible academic research and the rise of open-source software,
technologies now diffuse more quickly around the world. The availability of new
advances has helped competitors catch up at record speed, as China eventually
did in 4G. Although some of China’s recent technological success stems from
economic espionage and a disregard for patents, much of it traces back to
innovative, rather than derivative, efforts to adapt and implement new
technology.
Indeed, Chinese companies have
enjoyed resounding success in adopting and commercializing foreign
technological breakthroughs. In 2015, the Chinese Communist Party laid out its
“Made in China 2025” strategy to achieve self-sufficiency in high-tech
industries such as telecommunications and AI. As part of this bid, it announced
an economic plan of “dual circulation,” whereby China intends to boost both
domestic and foreign demand for its goods. Through public-private partnerships,
direct subsidies to private companies, and support for state-backed companies,
Beijing has poured billions of dollars into ensuring it comes out ahead in the
race for technological supremacy. So far, the record is mixed. China is ahead
of the United States in some technologies yet lags in others.
It is hard to say whether China
will seize the lead in AI, but top officials in Beijing certainly think it
will. In 2017, Beijing announced plans to become the global leader in
artificial intelligence by 2030, and it may achieve that goal even earlier than
expected. China has already accomplished its goal of becoming the world’s
leader in AI-based surveillance technology, which it not only uses to control
dissidents at home but also sells to authoritarian governments abroad. China
still ranks behind the United States in attracting the best minds in AI, with
almost 60 percent of top-tier researchers working in U.S.
universities. But China’s loose privacy laws, mandatory data collection,
and targeted government funding give the country a key advantage. Indeed, it
already leads in the production of autonomous vehicles.
For now, the United States still
retains an edge in quantum computing. Yet over the past decade, China has
invested at least $10 billion in quantum technology,
roughly ten times as much as the U.S. government. China is working to build
quantum computers so powerful that they will easily crack today’s encryption.
The country is also investing heavily in quantum networks — a way of transmitting
information in the form of quantum bits — presumably in the hope that such
networks would be impervious to monitoring by foreign intelligence agencies. Even
more alarming, the Chinese government may already be storing stolen and
intercepted communications with an eye to decrypting them once it possesses the
computing power to do so, a strategy known as “store now, decrypt later.” When quantum computers become fast enough, all communications encrypted with non-quantum methods will be at risk of decryption, raising the stakes of achieving this breakthrough first.
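The mechanics of that strategy are simple enough to sketch. The Python fragment below is purely schematic: no real cryptography or quantum computation is involved, and every name in it is invented. It shows only why traffic that is unreadable today can still be compromised years after it is captured.

```python
import base64
import datetime

archive = []  # intercepted traffic, stored against a future capability

def intercept(ciphertext: bytes, source: str) -> None:
    """Store traffic that cannot be decrypted today, tagged for later."""
    archive.append({
        "captured": datetime.date.today().isoformat(),
        "source": source,
        "ciphertext": base64.b64encode(ciphertext).decode(),
    })

def exploit_archive(quantum_ready: bool) -> None:
    """Once a powerful quantum computer exists, revisit the archive; keys
    protecting old RSA- or ECC-encrypted traffic could then be recovered
    (for example, via Shor's algorithm), exposing years of communications."""
    if not quantum_ready:
        return
    for record in archive:
        print(f"decrypting {record['source']} traffic from {record['captured']}")

intercept(b"\x8a\x1f\x42", source="diplomatic-cable")
exploit_archive(quantum_ready=False)  # today: the archive stays dark
exploit_archive(quantum_ready=True)   # after the breakthrough: it opens
```

The defense, accordingly, is not only to win the race to build quantum computers but also to migrate sensitive communications to post-quantum encryption before adversaries finish filling their archives.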
China is also actively trying to
catch up with the United States in synthetic biology. Scientists in this field
are working on a range of new biological developments, including microbe-made
cement that absorbs carbon dioxide, crops with an increased ability to
sequester carbon, and plant-based meat substitutes. Such technology holds
enormous promise to fight climate change and create jobs,
but since 2019, Chinese private investment in synthetic biology has outpaced
U.S. investment.
When it comes to semiconductors,
China has ambitious plans, too. The Chinese government is funding unprecedented
efforts to become a leader in semiconductor manufacturing by 2030. Chinese
companies are currently creating what are known in the industry as “seven
nanometer” chips, but Beijing has set its sights further, announcing plans to
domestically produce the new generation of “five nanometer” chips. For now, the
United States continues to outperform China in semiconductor design, as do
U.S.-aligned Taiwan and South Korea. In October 2022, the Biden
administration took the important step of blocking leading
U.S. companies producing AI computer chips from selling to China as part of a
package of restrictions released by the Department of Commerce. Yet Chinese
companies control 85 percent of the processing of the rare-earth minerals that
go into these chips and other critical electronics, offering an important point
of leverage over their competitors.
A BATTLE OF SYSTEMS
The competition between the
United States and China is as much a competition between systems as it is between
states. In the Chinese model of civil-military fusion, the government promotes
domestic competition and funds emerging winners as “national champions.”
These companies play a dual role,
maximizing commercial success and advancing Chinese national security
interests. The American model, on the other hand, relies on a more disparate
set of private actors. The federal government provides funding for basic science
but largely leaves innovation and commercialization to the market.
For a long time, the trifecta of
government, industry, and academia was the primary source of American
innovation. This collaboration drove many technological breakthroughs, from the
moon landing to the Internet. But with the end of the Cold
War, the U.S. government grew averse to allocating funding for
applied research, and it even lowered the amount devoted to fundamental
research. Although private spending has taken off, public investments have
plateaued over the past half century. In 2015, the share of government funding
for basic research dropped below 50 percent for the first time since the end
of World War II, having hovered around 70 percent in the 1960s.
Meanwhile, the geometry of innovation — the respective roles of public and private players in driving technological progress — has changed since the
Cold War, in ways that have not always yielded what the country needs. The
rise of venture capital helped accelerate adoption and commercialization, but
it did little to address higher-order scientific problems.
The reasons for Washington’s
reluctance to fund the science that serves as the
foundation of innovation power are structural. Innovation requires risk and, at
times, failure — something politicians are loath to accept. Innovation can
demand long-term investments, but the U.S. government operates on a single-year
budget cycle and, at most, a two-year political cycle. Despite these obstacles,
Silicon Valley (along with other hot spots in the United States) has still
managed to encourage innovation. The American success story relies on a potent
mix of inspiring ambition, startup-friendly legal and tax regimes, and a
culture of openness that allows entrepreneurs and researchers to iterate and
improve on new ideas.
That may not be enough, however.
Government support has long played a critical role in jump-starting innovation
in the United States, and research in technologies that seem outlandish now may
prove critical in the not-too-distant future. In 2013, for example, the Defense
Advanced Research Projects Agency invested in messenger RNA vaccines,
working with the biotech company Moderna, which would later develop and deliver
a COVID-19 vaccine in record time. But such examples are
rarer than they should be.
Competition with China demands a
reenergizing of the interplay among the government, the private sector, and
academia. Just as the Cold War led to the creation of the National Security
Council, today’s tech-fueled competition should spur a rethinking of existing
policymaking structures. As the National Security Commission on Artificial
Intelligence (which I chaired) recommended, a new “technology competitiveness
council,” inspired by the NSC, could help coordinate action among private
actors and develop a national plan to advance crucial emerging technologies. In
a promising sign, Congress appears to have recognized the need for decisive
support. In 2022, in a bipartisan vote, it passed the CHIPS and Science Act,
which directs $200 billion in funding for scientific R&D over the next
ten years.
INVESTING IN THE FUTURE
As part of its effort to ensure
that it remains an innovation superpower, the United States will need to invest
billions of dollars in key areas of technological competition. In
semiconductors, perhaps the most vital technology today, the U.S. government
should redouble its efforts to onshore and “friend-shore” supply chains, relocating them to the United States or friendly countries. In renewable energy, it should fund R&D for microelectronics, stockpile the critical minerals (such as lithium and cobalt) needed for batteries and electric vehicles, and invest in new technologies that can replace lithium-ion batteries
and offset China’s resource dominance. Meanwhile, the rollout of 5G in the
United States has been slow, in part because government agencies — most
notably, the Department of Defense — control most of the high-frequency radio
spectrum that 5G uses. To catch up with China, the Pentagon should open up more
of the spectrum to private actors.
The United States will need to
invest in all parts of the innovation cycle, funding not just basic research
but also commercialization. Meaningful innovation requires both invention and
implementation, the ability to execute and commercialize new inventions at
scale. The latter is often the main stumbling block. Research in electric cars, for
example, helped General Motors bring its first model onto the market in 1996,
but it took two more decades before Tesla mass-produced a commercially viable
model. Every new technology, from AI to quantum computing to synthetic biology,
must be pursued with the clear goal of commercialization.
In addition to directly investing
in the technologies that fuel innovation power, the United States must invest
in the input that lies at the core of innovation: talent. The United States
boasts the world’s top startups, incumbent companies, and universities, all of
which attract the best and the brightest from around the world. Yet too many
talented people are prevented from coming to the United States by its outdated
immigration system. Instead of creating an easy path to a green card for
foreigners who earn STEM degrees from American schools, the current system
makes it needlessly difficult for top graduates to contribute to the U.S.
economy.
The United States has an
asymmetric advantage when it comes to employing highly skilled immigrants, and
its enviable living standards and abundant opportunities explain why the
country has attracted most of the world’s brightest AI minds. More than half of
all AI researchers working in the United States hail from abroad, and the
demand for AI talent still far exceeds supply. If the United States closes its
doors to talented immigrants, it risks losing its innovative edge. Just as the
Manhattan Project was led in large part by refugees and émigrés from Europe, the
next American technological breakthrough will almost certainly rely on
immigrants.
THE BEST DEFENSE
As part of its efforts to
translate innovation into hard power, the United States must fundamentally
rethink some of its defense policies. During the Cold War, the country designed
various “offset” strategies to counterbalance Soviet numerical superiority
through military strategy and technological innovations. Today, Washington
needs what the Special Competitive Studies Project has called an “Offset-X” strategy,
a competitive approach through which the United States can maintain
technological and military superiority.
Given how much modern militaries
and economies rely on digital infrastructure, any future great-power war is
likely to start with a cyberstrike. The United States’ cyberdefenses, therefore, will need to respond faster than any human can. Facing constant cyberattacks even in peacetime, the United States should armor itself with redundancy, creating backup systems and alternative paths for data flows.
What starts in cyberspace could
easily escalate into the physical realm, and there, too, the United States will
need to meet new challenges. To counter possible swarm drone attacks, it must
invest in defensive artillery and missile systems. To improve battlefield
awareness, the U.S. military should focus on deploying a network of inexpensive
sensors powered by AI to monitor contested areas, an approach that is often
more effective than a single, exquisitely crafted system. As human intelligence
becomes harder to obtain, the United States must increasingly rely on the
largest constellation of sensors of any country, ranging from undersea to outer
space. It will also need to focus more on open-source intelligence, given
that most of the world’s data today is publicly available. Without this capability, the United States risks being blindsided by intelligence failures.
When it comes to actual fighting,
military units should be networked and decentralized to better outmaneuver
opponents. Facing adversaries with rigid military hierarchies, the United
States could gain an advantage by using smaller, more connected units whose
members are adept at network-based decision-making, employing the tools of
artificial intelligence to their advantage. For example, a single unit could
bring together capabilities in intelligence collection, long-range missile
attacks, and electronic warfare. The Pentagon needs to provide battlefield
commanders with all the best information and allow them to make the best choices
on the ground.
The U.S. military must also learn
to integrate new technologies into its procurement process, battle plans, and
warfighting. In the four years that I chaired the Defense Innovation Board, I
was astounded by how difficult this was to do. A major bottleneck is the
Pentagon’s burdensome procurement process: major weapons systems take more than
ten years to design, develop, and deploy. The Department of Defense should look
for inspiration in the way the tech industry designs products. It should build missiles the way companies now build electric cars, using a design studio to develop and simulate software, with the aim of innovating ten times as fast and as cost-effectively as current processes allow. The current procurement system is
especially ill suited for a future in which software primacy proves decisive on
the battlefield.
The United States spends four
times as much as any other country to procure military systems, but price is a
poor metric for judging innovation power. In April 2022, Ukrainian forces fired
two Neptune missiles at the Moskva, a 600-foot Russian warship,
sinking the vessel. The ship cost $750 million; the missiles, $500,000 apiece, a cost-exchange ratio of roughly 750 to one in Ukraine’s favor.
Likewise, China’s state-of-the-art hypersonic antiship missile, the YJ-21,
could someday sink a $10 billion U.S. aircraft carrier. The U.S. government
should think twice before committing another $10 billion and ten years to such
a vessel. It often makes more sense to buy many low-cost items instead of
investing in a few high-ticket prestige projects.
PLAYING TO WIN
In the contest of the century —
the U.S. rivalry with China — the deciding factor will be innovation power.
Technological advances in the next five to ten years will determine which
country gains the upper hand in this world-shaping competition. The challenge
for the United States, however, is that government officials are incentivized
to avoid risk and focus on the short term, leaving the country to chronically
underinvest in the technologies of the future.
If necessity is the mother of invention,
war is the midwife of innovation. Speaking to Ukrainians on a visit to Kyiv in
the fall of 2022, I heard from many that the first months of the war were the
most productive of their lives. The United States’ last truly global war — World War II — led to the widespread adoption of penicillin, a revolution in nuclear
technology, and a breakthrough in computer science. Now, the United States must
innovate in peacetime, faster than ever before. By failing to do so, it is
eroding its ability to deter — and, if necessary, to fight and win — the next
war.
The alternative could be
disastrous. Hypersonic missiles could leave the United States defenseless, and
cyberattacks could cripple the country’s electric grid. Perhaps even more
important, the warfare of the future will target individuals in completely new
ways: authoritarian states such as China and Russia may be able to collect
individual data on Americans’ shopping habits, location, and even DNA profiles,
allowing for tailor-made disinformation campaigns and even targeted biological
attacks and assassinations. To avert these horrors, the United States needs to
make sure it remains ahead of its technological competitors.
The principles that have defined
life in the United States — freedom, capitalism, individual effort — were the
right ones for the past and remain so for the future. These basic values lie at
the foundation of an innovation ecosystem that is still the envy of the world.
They have enabled breakthroughs that have transformed everyday life around the
world. The United States started the innovation race in pole position, but it
cannot rest assured it will remain there. Silicon Valley’s old mantra holds
true not just in industry but also in geopolitics: innovate or die.
February 28, 2023 — Eric Schmidt, Chair of the Special Competitive Studies Project and former CEO and chairman of Google
© 2023 by the Council
on Foreign Relations, Inc.