Sputnik, AI, and the Nature of Victory

The US foreign-policy community has been gathering itself around the goal of winning the AI race against China. The problem is that defining “winning” is not at all easy. If winning consists of US companies, in cooperation with the US government, enjoying a monopoly on the best AI technology for some extended period — which does seem to be what is expected — SIG’s view is that winning is nearly impossible. The only way the US could come close is by sharing technology within some type of alliance. But that would entail non-American companies within the alliance having revenues and profits of their own. The US and US companies cannot “win” this alone.

As SIGnal has emphasized before, digital technology has been taking the world’s defense sectors by surprise for some 30 years. Whether it is low-earth-orbit satellite swarms, drones or navigational improvements, technology developed for civilian purposes becomes a military must-have. Proliferation is built into such a process. Military hardware needs software; software lends itself to proliferation, theft, imitation, and improvement. Artificial-intelligence software is no different.

Containment of American AI within US boundaries goes against the nature of the 21st-century technology industry. Most innovation comes from the private sector, whose ability to maximize profit and minimize costs depends on a global marketplace for products and labor. The defense sector is not the private sector but a curious public-private blend. American defense companies do sell a lot to overseas customers, but the customer whose needs shape the greater part of production is the US government. Proliferation of American defense contractors’ products, including software and data, is carefully regulated. Workers need to get government clearances. Contracts have to conform to official bureaucratic standards. There is plenty of red tape. The payoff for defense companies has been the security of long-term contracts and a relatively high level of protection from competition, notably from foreign competition. The main downside is that profits from such quasi-public business, in the absence of corruption and favoritism, are limited by the obligation of Congress to ensure that the government is not overspending. Innovation within the defense sector thus seems to come up against natural limits. That is not the case in the private sector, which is why so much military innovation comes from outside the defense sector and commonly occurs for reasons that have nothing to do with defense.

This is abundantly true of AI innovation. If the US government wanted to make AI innovation henceforth a government-controlled process, it would amount to turning AI companies into defense companies — which would remove much of their incentive for innovation, defeating the purpose of the exercise. It would not be much of a victory in the race for AI dominance.

By contrast, operating with trusted partner countries would have some of the advantages of globalization — multiple labor and consumer markets to choose from — while preserving the goal of excluding China and other antagonists. Of course, forming some sort of digital alliance structure has been a US goal since the middle of the first Trump administration. Results have been mixed. There has been a contradiction at its core: The US wants partners but insists on being the dominant one. That kind of dominance cannot work in the case of private-sector-led technology innovation.

Fortunately US tech companies, although in their own ways just as hungry for dominance as the US government, have become accustomed in the last decade to competing in markets with foreign companies and not always winning. They have invested huge amounts in overseas markets: to pay suppliers, establish their own production, or attract customers but also to take advantage of the huge and growing innovation ecology that exists outside the United States. And foreign governments and private competitors have gotten used to them as well. The degree to which US tech companies can be profitably active in non-American markets without dominating them is an example of a type of loose alliance. The struggle with China is an important shaping factor but it does not distort everything it touches.

Learning from the success of this private-sector-led approach to the US-China tech contest could lead to a public-sector variant that could help control AI proliferation while accepting that winning the AI race with China, in the winner-take-all sense, cannot be done. A different type of victory might be possible though. After all, when the US, following the Soviets’ shocking Sputnik launch in 1957, went all out to win “the space race” against the USSR, it did not so much prevail as demonstrate its ability to continue to innovate at a pace the Soviet Union could not match. The result, in 1975, was the Apollo-Soyuz mission, in which American astronauts and Soviet cosmonauts docked and worked together in orbit (a cooperation that continues today, with Russians and Americans living together aboard the International Space Station), and the growth of an international scientific subculture that played an important role in bringing the Soviet experiment in oppressive governance to a close.

AI is Just a Tool

By Dee Smith

There are many problems with AI, some of which I will explore in future posts. But the most basic problem is that, as we have all experienced, computers break.

For computers to continue to run, multiple people capable of fixing them must be available at all times.

Remembering this, is it a good idea to give more aspects of our lives over to “intelligent” systems so undependable? The things we rely on to obtain the food we eat, the water we drink, and to make, manage, and spend our money? The systems we use to conduct business, to take care of our health, our critical infrastructure, and our national security?

We already do, of course, but the teams are in place to fix them when they malfunction.

The unreliability of computers is not a passing problem. Computer systems, considered as a whole, are scarcely more reliable now than they were 30 years ago. Hardware is somewhat more reliable, but software is increasingly complex, increasingly unpredictable (complex systems are inherently more unpredictable), and increasingly unreliable.

Relying on AI systems makes us vulnerable in several critical ways. First is their exposure to attack. To cite just one example: the discovery of previously undetected flaws leads to “zero-day exploits,” criminal or terrorist attacks that take advantage of those flaws before they can be patched.

Second are the continuing “hallucinations” AI experiences, where it gives entirely wrong, and sometimes nonsensical, information, often for reasons computer scientists do not understand. What if it does this while managing an element of critical infrastructure and the problem is “inside” the system, where it cannot easily be detected or fixed?

Third, all computer systems are subject to severe malfunctions due to rare, but potentially catastrophic, single-event upsets (SEUs), a class of single-event effects (SEEs) caused by cosmic rays bombarding the earth.

Fourth is AI’s requirement for a vast and ever-increasing level of electrical power for operation.

The reason computer systems are so ubiquitous is, of course, money. This works in two ways: the money being made and the money being saved by replacing human laborers. From a social standpoint, the latter may well be a Pyrrhic victory: displacing millions of people from their jobs creates a huge social cost, in real money.

Are computer systems, in general, more efficient than humans? There is no evidence that they are. Computers are able to crunch numbers within mathematical operations much faster than humans — although that is discounting the enormous computational power of the brain of a human, let alone the brain of a bird or even an ant, doing everyday things. There is no real understanding of how these biological intelligent systems work. Computer systems seem more efficient only because of the extremely limited scope within which they are operating.

Consider two alternatives, at opposite ends of the spectrum. One is that computer systems, as they become more and more complex, also become more and more fragile. When a system related to food production, or finance, or national security breaks catastrophically somewhere, the failure cascades through the system.

What if systems could be made substantially more reliable? Perhaps some unforeseen breakthrough will dramatically improve their dependability. Then suppose, as some people insist (incorrectly to my thinking), that AI can and will progress to Artificial General Intelligence (AGI). Imagine that this results in a superhuman intelligence. It could be one that emerges at a critical-mass-type point, almost in an instant (this is called the “singularity” by AGI aficionados). Were this to happen, we have no way of knowing whether such an entity would be benign, neutral, or malicious to humans.

But if such an AGI is trained on the sum total of human knowledge and expression, then that AGI is going to be loaded with all the bad along with the good. Do we really want to live in a world governed by transcendently intelligent and powerful machines trained on the behavior of what are essentially clever, volatile, often enraged chimpanzees? (We share 98.4 percent of our DNA with chimps.) Watching any war movie, or really most any movie, would suggest we might not.

And if the AGI was not trained on human knowledge and culture, what would it be trained on?

Biological systems have had about 4 billion years of evolution on this planet to become reliably dependable in operation. They are generally able, as living systems, to survive constant bombardment by radiation from space, extreme temperatures, rapid changes in climate, changes in atmospheric chemistry — and most important, to survive without someone standing by to repair or reboot them. This is a property known as homeostasis. Life has evolved naturally over an immense period of time through adaptation: trial and error.

On the other hand, our computer systems — based on silicon, not carbon — do have a very fallible creator: us. And they have been around about 70 years, or roughly one fifty-millionth as long as biological systems.

The belief in the inevitable ascendancy of AGI is an article of faith for many involved in the computer industry and for others outside the industry who uncritically accept this “techno-religious” belief system. In its more virulent forms, it is teleological: a burning faith in an inevitable direction of history, in which AGIs are the successors to humanity. And in which the sacred duty of computer scientists is to bring about the birth of this supremely intelligent “life” form.

If I had told you 30 years ago that you would have in your pocket a self-powered device the size of a pack of cards that could tell you how to drive, turn by turn, from your current address to a building in a city 1000 miles away, you would probably have thought that it must be intelligent to be able to do this.

Do you think of your smartphone that way today? My estimation is that this is how we will think of AI in 30 years: a useful, not entirely dependable tool. Nothing more.

The Rest Is Software

US President Donald Trump’s visit to the Persian Gulf brought the region back into the American camp on artificial intelligence. The White House’s cancellation of the Biden administration’s AI-diffusion regulation was well timed: the message of both the trip and the cancellation was that this administration will not draw distinctions, as its predecessor did, in advancing what Commerce Secretary Howard Lutnick called “Trump’s vision for US AI dominance.” The US is, in a sense, trying to de-regulate AI politically. Washington’s move to block AI regulation by US states is also part of this. In SIG’s view, whether such de-regulation will achieve the goal of AI dominance is a different question.

As with crypto, the current administration’s bias with AI is to let the chips fall where they may, so to speak, while also aggressively using the power of the state — as investor, as enforcer, as customer — to secure American advantages. Trump’s experiences of being deplatformed by Big Tech must have shaped his views: bitterness over the suppression of conservative speech, alongside the supposed promotion of anti-conservative speech, has been a dominant note since his second inauguration. In this scenario, technology and tech innovation were shown not to be autonomous forces, proceeding according to their own logic, perhaps capable of being channeled but not of being controlled. Rather they were the effects of companies run by individuals who could be influenced. That was well within the comfort zone of a lifelong businessman. (See the tariff retaliation against Apple for relocating its China production to India rather than the US.) It is a pro-market perspective in a way, but with the market understood as a place for ruthless competition among a small number of unconstrained players rather than as a mechanism for maximizing the efficient distribution of capital and labor.

Similarly, the role of the state in this perspective is to personify the nation in unconstrained and ruthless competition among states for, in Commerce Secretary Lutnick’s term, “dominance.” President Trump’s appetite for military confrontation in his first term was low, and that seems to be carrying into his second term. His appetite for economic confrontation was relatively high in term one and has gone to a new level in term two. The tools of the state are the weapons he has for such confrontation. They are directed toward securing dominance. Trump is personifying the powerful idea of economic nationalism.

The difficulty, with regard to “US AI dominance,” is that the AI sector is not like other industrial or commercial sectors. The preferred means for dominating AI has been the control of hardware, as in export controls on leading-edge chips or chip-design lithography equipment. Biden’s AI-diffusion regulations, like his CHIPS Act and much else, were about the geopolitics of hardware distribution. President Trump has opened that floodgate. But once the hardware starts flowing and the data centers are built, the rest is software, the diffusion of which is extremely hard to control. Software can be stolen or replicated; more important, it can be developed independently, as DeepSeek has shown. The supply of chips and what is necessary to manufacture them can be choked off, up to a point. The supply of engineers and software-engineering skills really cannot. It will be diffused regardless of what the US or China wants.

Among other things, this means US AI dominance depends on the strength and autonomy of US universities, the freedom to innovate in the US tech sector independent of political agendas, the smooth functioning of open global markets, sensible market pricing of resource inputs, the reduction of obstacles to the cross-border movement of labor … all of which run contrary to current US policy.

The Gulf states are investing in US AI infrastructure on the way to building their own systems, which will have the capacity to become independent of US systems (see SIGnal, “The America Stack,” Feb. 5, 2025). The Emiratis are not happily volunteering to be hostages to US AI dominance. They are seizing the opportunity to gain access to the best technology that will enable them to maximize their own sovereignty while positioning themselves to be a sort of port for the storage, manipulation, and distribution of data, just as Dubai’s port operates with coffee, tea, and so much else.

The pattern is similar elsewhere, although no one can direct capital with quite the speed, and in quite the volume, that the Gulf states bring to bear. Malaysia hesitated for a moment at new deals for Chinese technology when Washington threatened retaliation against states using Huawei’s latest AI chips, but in the end, the shape of AI is not going to be determined by hardware. The massive computing power required to participate in the search for the grail of Artificial General Intelligence (AGI) is indeed a hardware question, but for sub-AGI artificial intelligence, which might well prove to be most if not all of AI, hardware is only one factor. The rest is software. And US dominance of it is unlikely to be secured using the current means.

The Defense Industry's New Math

Global military spending in 2024 hit a record that will be broken in 2025. Much of the growth comes from the US (which just announced a goal of a $1 trillion defense budget) and its adversaries, but an important part is from US allies that feel they can no longer rely on US security guarantees. For that reason, they seek to build their own defense industrial bases rather than simply buy more American military products. There are opportunities for investors in this global proliferation of military production financed by government budgets, although the peculiarities of military industries make it more important than usual to have the right expertise. Defense-sector exchange-traded funds (ETFs) have, not surprisingly, boomed: the VanEck Defense UCITS ETF took in $1 billion in March 2025 alone.

In 2024 global military spending hit $2,718 billion, a 9.4% increase over 2023 and the steepest year-on-year rise since the end of the Cold War. The main drivers were the conflicts in Ukraine and Gaza. Israel’s spending increased 65%, to $46.5 billion, which represented 8.78% of GDP, the second highest ratio after Ukraine — which spent nearly 35% of GDP on its military. Russia spent $149 billion, up 28% from 2023 and representing 7.1% of GDP and 19% of total government spending. German spending surged to $88.5 billion, the fourth largest total in the world after the US, China, and Russia, and just ahead of India at $86.1 billion.

All of these numbers are likely to grow in 2025 and into 2026, except perhaps in Ukraine, which might not be able to get above 35% of GDP. But the Ukraine example illustrates a different and more interesting dynamic. According to one report by a former Ukrainian official, Ukraine’s domestic defense sector has grown from $1 billion to $35 billion in just three years. It now produces about a third of Ukraine’s weapons and ammunition, and nearly all of its drones. That is not nearly enough to protect itself against the Russian army, but it is enough to ease some of the country’s dependence on the US.

Similarly, Germany in particular, but also France and the European Union, have entered a new era in terms of domestic military production. Germany’s chancellor, Friedrich Merz, won a parliamentary vote in March to exempt the defense sector from Germany’s “debt brake” policy. Merz also appealed to the EU to exempt defense production from its own spending rules. (EU member states have their own military budgets but the EU has rules on public debt.) Sixteen of the Union’s 27 members are seeking exemptions from the EU rules so they can increase their defense spending.

What is driving all this spending is principally the desire to, as Merz puts it, “achieve independence from the USA,” which under President Trump he sees as “largely indifferent to the fate of Europe.” EU Commission President Ursula von der Leyen, herself a former German defense minister, declared, “We are in an era of rearmament,” one that requires Europeans to construct their own defense as part of what France’s President Macron refers to as “strategic autonomy” from the US. The EU hopes that new bloc-wide procurement policies will strengthen European defense production at the cost of American materiel.

There is irony in the fact that European NATO members in recent years have spent more, not less, on weaponry produced in the US: from 52% of spending in 2015-19 to 64% in 2020-24. But that very dependence is why traditional US allies are so focused on independence from the US now that the US has abandoned its traditional approach to alliances. It is not just Europe. South Korea has been trying to replace US purchases with its own production for several years, partly so that it can export weapons. Japan also seeks to increase domestic military industries. Israel is striving for self-sufficiency in bomb production. Even Australia has been trying to be more militarily independent, although in practice Australian defense production, current and projected, is commonly done jointly with US defense primes.

The proliferation of defense production in a globalized world can lead to curiosities, such as the battle between a Chinese state-controlled defense company and an Australian one to buy a troubled Brazilian manufacturer. That in turn points to both the internationalization of military production and the question of what gets done with the products. US military industries and the US military itself have always advanced together. Foreign military sales were integrated into a much larger public-private strategy that was rooted in political alliances. The point was not to sell to enemies. The proliferation of military-industrial production in the past three years suggests a future in which weapons will be available from many sellers, including NATO members, with little or no reference to US policy guidance.

In short, the desire for autonomy from the US is driving a global surge in weapons production that will in turn lead to weapons proliferation on an unprecedented scale. Unless there is a significant increase in war, there will be an increase in excess production. Excess production will need to be off-loaded somewhere. This is the peculiarity of defense production. If you are not simply stockpiling — which is a dead weight on the economy — then you are proliferating. Weaponry ETFs in this scenario would have to be a short-term play. The longer-term returns will be in companies that aim not just at domestic production but at export.