AI is Just a Tool

By Dee Smith

There are many problems with AI, some of which I will explore in future posts. But the most basic problem is that, as we have all experienced, computers break.

Keeping computers running requires multiple people capable of fixing them, available at all times.

Remembering this, is it a good idea to hand over more aspects of our lives to “intelligent” systems this undependable? The things we rely on to obtain the food we eat, the water we drink, and to make, manage, and spend our money? The systems we use to conduct business, to take care of our health, our critical infrastructure, and our national security?

We already do, of course, but the teams are in place to fix them when they malfunction.

The unreliability of computers is not a passing problem. Computer systems, considered as a whole, are scarcely more reliable now than they were 30 years ago. Hardware is somewhat more reliable, but software is increasingly complex, increasingly unpredictable (complex systems are inherently more unpredictable), and increasingly unreliable.

Relying on AI systems makes us vulnerable in several critical ways. First is their exposure to attack. To cite just one example: undiscovered flaws can become “zero-day exploits,” criminal or terrorist attacks that take advantage of those flaws before anyone knows they exist.

Second are the continuing “hallucinations” AI experiences, where it gives entirely wrong, and sometimes nonsensical, information, often for reasons computer scientists do not understand. What if it does this while managing an element of critical infrastructure and the problem is “inside” the system, where it cannot easily be detected or fixed?

Third, all computer systems are subject to severe malfunctions due to rare, but potentially catastrophic, single-event upsets (SEUs) and other single-event effects (SEEs) caused by cosmic rays bombarding the earth.

Fourth is AI’s requirement for a vast and ever-increasing level of electrical power for operation.

The reason computer systems are so ubiquitous is, of course, money. This works in two ways: the money being made and the money being saved by replacing human laborers. From a social standpoint, the latter may well be a Pyrrhic victory: displacing millions of people from their jobs creates a huge social cost, in real money.

Are computer systems, in general, more efficient than humans? There is no evidence that they are. Computers can crunch numbers within mathematical operations much faster than humans, although that discounts the enormous computational power of the brain of a human, let alone the brain of a bird or even an ant, doing everyday things. There is no real understanding of how these biological intelligent systems work. Computer systems seem more efficient only because of the extremely limited scope within which they operate.

Consider two alternatives, at opposite ends of the spectrum. One is that computer systems, as they become more and more complex, also become more and more fragile. When a system related to food production, or finance, or national security breaks catastrophically somewhere, the failure cascades through the system.

What if systems could be made substantially more reliable? Perhaps some unforeseen breakthrough will dramatically improve their dependability. Then suppose, as some people insist (incorrectly, to my thinking), that AI can and will progress to Artificial General Intelligence (AGI). Imagine that this results in a superhuman intelligence. It could be one that emerges at a critical-mass-type point, almost in an instant (this is called the “singularity” by AGI aficionados). Were this to happen, we would have no way of knowing whether such an entity would be benign, neutral, or malicious to humans.

But if such an AGI is trained on the sum total of human knowledge and expression, then that AGI is going to be loaded with all the bad along with the good. Do we really want to live in a world governed by transcendently intelligent and powerful machines trained on the behavior of what are essentially clever, volatile, often enraged chimpanzees? (We share 98.4 percent of our DNA with chimps.) Watching any war movie, or really almost any movie, would suggest we might not.

And if the AGI was not trained on human knowledge and culture, what would it be trained on?

Biological systems have had about 4 billion years of evolution on this planet to become dependable in operation. They are generally able, as living systems, to survive constant bombardment by radiation from space, extreme temperatures, rapid changes in climate, changes in atmospheric chemistry, and, most important, to survive without someone standing by to repair or reboot them. This capacity for self-regulation is known as homeostasis. Life has evolved naturally over an immense period of time through adaptation: trial and error.

On the other hand, our computer systems, based on silicon, not carbon, do have a very fallible creator: us. And they have been around about 70 years, or roughly one sixty-millionth as long as biological systems.

The belief in the inevitable ascendancy of AGI is an article of faith for many involved in the computer industry and for others outside the industry who uncritically accept this “techno-religious” belief system. In its more virulent forms, it is teleological: a burning faith in an inevitable direction of history, in which AGIs are the successors to humanity. And in which the sacred duty of computer scientists is to bring about the birth of this supremely intelligent “life” form.

If I had told you 30 years ago that you would have in your pocket a self-powered device the size of a pack of cards that could tell you how to drive, turn by turn, from your current address to a building in a city 1000 miles away, you would probably have thought that it must be intelligent to be able to do this.

Do you think of your smartphone that way today? My estimation is that this is how we will think of AI in 30 years: a useful, not entirely dependable tool. Nothing more.

The Rest Is Software

US President Donald Trump’s visit to the Persian Gulf brought the region back into the American camp on artificial intelligence. The White House’s cancellation of the Biden administration’s AI-diffusion regulation was well timed: the message of both the trip and the cancellation was that this administration will not draw distinctions, as its predecessor did, in advancing what Commerce Secretary Howard Lutnick called “Trump’s vision for US AI dominance.” The US is, in a sense, trying to de-regulate AI politically. Washington’s move to block AI regulation by US states is also part of this. In SIG’s view, whether such de-regulation will achieve the goal of AI dominance is a different question.

As with crypto, the current administration’s bias with AI is to let the chips fall where they may, so to speak, while also aggressively using the power of the state — as investor, as enforcer, as customer — to secure American advantages. Trump’s experiences of being deplatformed by Big Tech must have shaped his views: bitterness over the suppression of conservative speech, alongside the supposed promotion of anti-conservative speech, has been a dominant note since his second inauguration. In this scenario, technology and tech innovation were shown not to be autonomous forces, proceeding according to their own logic, perhaps capable of being channeled but not of being controlled. Rather they were the effects of companies run by individuals who could be influenced. That was well within the comfort zone of a lifelong businessman. (See the tariff retaliation against Apple for relocating its China production to India rather than to the US.) It is a pro-market perspective in a way, but with the market understood as a place for ruthless competition among a small number of unconstrained players rather than as a mechanism for maximizing the efficient distribution of capital and labor.

Similarly, the role of the state in this perspective is to personify the nation in unconstrained and ruthless competition among states for, in US Commerce Secretary Lutnick’s term, “dominance.” President Trump’s appetite for military confrontation in his first term was low, and that seems to be carrying into his second term. His appetite for economic confrontation was relatively high in term one and has gone to a new level in term two. The tools of the state are the weapons he has for such confrontation. They are directed toward securing dominance. Trump is personifying the powerful idea of economic nationalism.

The difficulty, with regard to “US AI dominance,” is that the AI sector is not like other industrial or commercial sectors. The preferred means for dominating AI has been the control of hardware, as in export controls on leading-edge chips or chip-design lithography equipment. Biden’s AI-diffusion regulations, like his CHIPS Act and much else, were about the geopolitics of hardware distribution. President Trump has opened that floodgate. But once the hardware starts flowing and the data centers are built, the rest is software, the diffusion of which is extremely hard to control. Software can be stolen or replicated; more important, it can be developed independently, as DeepSeek has shown. The supply of chips and what is necessary to manufacture them can be choked off, up to a point. The supply of engineers and software-engineering skills really cannot. It will be diffused regardless of what the US or China wants.

Among other things, this means US AI dominance depends on the strength and autonomy of US universities, the freedom to innovate in the US tech sector independent of political agendas, the smooth functioning of open global markets, sensible market pricing of resource inputs, the reduction of obstacles to the cross-border movement of labor … all of which run contrary to current US policy.

The Gulf states are investing in US AI infrastructure on the way to building their own systems, which will have the capacity to become independent of US systems (see SIGnal, “The America Stack,” Feb. 5, 2025). The Emiratis are not happily volunteering to be hostages to US AI dominance. They are seizing the opportunity to gain access to the best technology that will enable them to maximize their own sovereignty while positioning themselves to be a sort of port for the storage, manipulation, and distribution of data, just as Dubai’s port operates with coffee, tea, and so much else.

The pattern is similar elsewhere, although no one can direct capital with quite the speed, and in quite the volume, that the Gulf states bring to bear. Malaysia hesitated for a moment at new deals for Chinese technology when Washington threatened retaliation against states using Huawei’s latest AI chips, but in the end, the shape of AI is not going to be determined by hardware. The massive computing power required to participate in the search for the grail of Artificial General Intelligence (AGI) is indeed a hardware question, but for sub-AGI artificial intelligence, which might well prove to be most if not all of AI, hardware is only one factor. The rest is software. And US dominance of it is unlikely to be secured using the current means.

The Defense Industry's New Math

Global military spending in 2024 hit a record that will be broken in 2025. Much of the growth comes from the US (which just announced a goal of a $1 trillion defense budget) and its adversaries, but an important part is from US allies that feel they can no longer rely on US security guarantees. For that reason, they seek to build their own defense industrial bases rather than simply buy more American military products. There are opportunities for investors in this global proliferation of military production financed by government budgets, although the peculiarities of military industries make it more important than usual to have the right expertise. Defense-sector exchange-traded funds (ETFs) have, not surprisingly, boomed: the VanEck Defense UCITS ETF took in $1 billion in March 2025 alone.

In 2024 global military spending hit $2,718 billion, a 9.4% increase over 2023 and the steepest year-on-year rise since the end of the Cold War. The main drivers were the conflicts in Ukraine and Gaza. Israel’s spending increased 65%, to $46.5 billion, which represented 8.78% of GDP, the second highest ratio after Ukraine — which spent nearly 35% of GDP on its military. Russia spent $149 billion, up 28% from 2023 and representing 7.1% of GDP and 19% of total government spending. German spending surged to $88.5 billion, the fourth largest total in the world after the US, China, and Russia, and just ahead of India at $86.1 billion.

All of these numbers are likely to grow in 2025 and into 2026, except perhaps in Ukraine, which might not be able to get above 35% of GDP. But the Ukraine example illustrates a different and more interesting dynamic. According to one report by a former Ukrainian official, Ukraine’s domestic defense sector has grown from $1 billion to $35 billion in just three years. It now produces about a third of Ukraine’s weapons and ammunition, and nearly all of its drones. That is not nearly enough to protect the country against the Russian army, but it is enough to ease some of its dependence on the US.

Similarly, Germany in particular, but also France and the European Union, have entered a new era in terms of domestic military production. Germany’s chancellor, Friedrich Merz, won a parliamentary vote in March to exempt defense spending from Germany’s “debt brake” policy. Merz also appealed to the EU to exempt defense production from its own spending rules. (EU member states have their own military budgets but the EU has rules on public debt.) Sixteen of the Union’s 27 members are seeking exemptions from the EU rules so they can increase their defense spending.

What is driving all this spending is principally the desire to, as Merz puts it, “achieve independence from the USA,” which under President Trump he sees as “largely indifferent to the fate of Europe.” EU Commission President Ursula von der Leyen, herself a former German defense minister, declared, “We are in an era of rearmament,” one that requires Europeans to construct their own defense as part of what France’s President Macron refers to as “strategic autonomy” from the US. The EU hopes that new bloc-wide procurement policies will strengthen European defense production at the expense of American materiel.

There is irony in the fact that European NATO members in recent years have spent more, not less, on weaponry produced in the US: from 52% of spending in 2015-19 to 64% in 2020-24. But that very dependence is why traditional US allies are so focused on independence from the US now that the US has abandoned its traditional approach to alliances. It is not just Europe. South Korea has been trying to replace US purchases with its own production for several years, partly so that it can export weapons. Japan also seeks to expand its domestic military industries. Israel is striving for self-sufficiency in bomb production. Even Australia has been trying to be more militarily independent, although in practice Australian defense production, current and projected, is commonly done jointly with US defense primes.

The proliferation of defense production in a globalized world can lead to curiosities, such as the battle between a Chinese state-controlled defense company and an Australian one to buy a troubled Brazilian manufacturer. That in turn points to both the internationalization of military production and the question of what gets done with the products. US military industries and the US military itself have always advanced together. Foreign military sales were integrated into a much larger public-private strategy that was rooted in political alliances. The point was not to sell to enemies. The proliferation of military-industrial production in the past three years suggests a future in which weapons will be available from many sellers, including NATO members, with little or no reference to US policy guidance.

In short, the desire for autonomy from the US is driving a global surge in weapons production that will in turn lead to weapons proliferation on an unprecedented scale. Unless there is a significant increase in war, there will be an increase in excess production. Excess production will need to be off-loaded somewhere. This is the peculiarity of defense production. If you are not simply stockpiling — which is a dead weight on the economy — then you are proliferating. Weaponry ETFs in this scenario would have to be a short-term play. The longer-term returns will be in companies that aim not just at domestic production but at export.

Can AI Make a Country Great Again?

Much recent commentary on artificial intelligence (AI) has focused on the prospect of a company or a country winning a race for artificial general intelligence (AGI) or more-than-human “superintelligence.” However, that goal, which seems rather more religious than technological, is both elusive and, should it ever be achieved, fragile (see SIGnal, “Mutual Assured Malfunction,” March 13, 2025). Investors are focusing instead on “little tech” and firm-level or industry-level AI that uses specific data sets to engineer specific productivity gains. In SIG’s view, this more modest course seems both economically more promising and politically much more sustainable. But it has risks of its own.

The appeal of “little tech” AI is partly that it leaves to one side the many serious questions about data privacy and other more existential matters that are posed by AGI. Smaller AI systems can run on the contained, often proprietary data sets involved in industrial processes, especially in manufacturing. The goal is not to replicate the human brain but to make industrial processes more efficient, raising productivity. It is a type of automation, using new technology yet still familiar enough from the history of industrial production.

With little-tech AI, startups can focus on specific problems whose solutions will provide a payoff in the relatively short term. In other words, AI would be monetizable. This has an obvious appeal not just to startup investors but also to industrial incumbents whose processes would be improved and whose productivity would be raised in competition with their rivals. Startups are not alone in this sphere. The German giant Siemens, for example, has put industrial AI at the core of its offering.

Politically, this approach to AI is much more appealing to most governments, only a few of which (the US, China) can have much hope of achieving global dominance by winning a race for AGI, at which point they might well regret getting what they wished for. Leaving aside the large question of AI data-center electricity demands, it offers the attractive prospect of raising productivity while reducing carbon use — because your factory in Texas, enhanced by AI, will no longer have to source so many of its components from East Asia, with all the carbon-using transport that entails.

The little-AI approach also means states would not have to expose their citizens’ data to foreign tech multinationals, possibly based in hostile or overweening states, in order to participate in the later 21st century. That would be a gain for state sovereignty; and given that so many of the tensions around globalization have had to do with the way it threatens sovereignty and the democratic (or otherwise) accountability of governments to citizens, the little-AI approach could conceivably enhance global stability and the prospects for peace.

Little AI, by improving productivity within a given national domestic workforce, could help states that are facing demographic stagnation — which is pretty much all industrialized states and many less-industrialized ones — to nonetheless grow on the basis of domestic labor (see SIGnal, “AI Family Values,” May 3, 2024). As Marc Andreessen and Ben Horowitz wrote in July 2024, “little tech” could make it possible “to reconstruct the American manufacturing sector around automation and AI, reshoring entire industries and creating millions of new middle class jobs” while also having green benefits.
Technology could, in effect, provide the “labor” that would solve the biggest challenge facing President Trump’s vision of a more self-sufficient US: the lack of workers operating at a sufficient level of productivity (see SIGnal, “Trade Wars and US Labor,” April 11 2025).

Less carbon use, stabilization of the international sovereign-state system, a growing middle class, a renewal of rich-world domestic manufacturing but with higher wages and less grim manual work… What could possibly go wrong?

AI-enhanced production aimed at reshoring manufacturing to high-wage economies would square the circle of productivity growth and de-globalization. It would revive the pre-1975 global industrial status quo with the crucial addition of China (but not so much India or Southeast Asia). If you have the good fortune to live and work in a benefiting state, this would be a positive outcome. It could, however, also fuel techno-nationalism in the rich world (plus China) and make growth outside the AI-enhanced nations highly problematic. One key issue raised by the US-China struggle — a protected US market deprives non-American producers of consumers, while a protected Chinese economy, likewise deprived, dumps its production for the pre-tariffs US market onto the rest of the world’s economies — would be gravely worsened as the world’s two largest economies reduce their dependence on the rest of the world for both supply and demand.

AI-enhanced de-globalization could, in short, reverse the global redistribution of labor productivity that led to the greatest poverty reduction in human history. In theory, the gains from little AI could be more equally distributed. After all, the AI enhancements that would lift an underemployed person in Oklahoma or eastern Germany into the middle class of his or her domestic economy could do the same for a person in Nigeria or Thailand. But that outcome is not the goal for the people, states, and companies that are driving the growth in AI monetization. Their goal is nearly the opposite. For investors, the greatest gains will come from identifying companies and sectors best positioned to gain from AI-enhanced de-globalization.