Déjà vu All Over Again

By Dee Smith

With his entry into the Israel-Iran war, Donald Trump seems to have gone over to neoconservatism, even invoking the goal of regime change, an old neocon favorite. It remains to be seen at this writing what will happen to the cease-fire he has imposed, but the interesting thing from a policy standpoint is how much this is both in accordance with — and violates — legacy patterns of US foreign policy.

Many Iranians outside Iran are pleased at Trump’s decision, even as they are desperately concerned about their families who remain there. Anne Applebaum cites an article from an anonymous Iranian source published last weekend in Persuasion:

knowing that the men who’ve held us hostage for forty-six years, who’ve ransacked our country, raped and killed our daughters and executed our men for asking for their basic human rights, are finally getting what they deserve—that brings me peace.

That view of the recent American action comes very close to a classic element of the liberal international order in its later form: the “Responsibility to Protect,” or R2P. Under this doctrine, the international community has a responsibility to intervene inside states that do not protect their populations from atrocities such as war crimes or genocide.

All of this is to say — with apologies to Mark Twain — that reports of the death of neoconservatism and of the liberal international order have been greatly exaggerated. They are gone, but also not gone. They are there, but so radically mutating they are no longer themselves.

That is characteristic of our entire world today. We are living in a time in which ideologies are both more important than ever, and the varieties of thinking and expressing ideologies are more confused and at odds with one another than ever, and in which many people are not sure whether they actually believe what they claim to believe … or want to believe.

This multi-directional confusion is characteristic of most elements of global society and culture: Multiple ideas, trends, and styles from the past are reinvoked and mixed together, often haphazardly. This extends to culture, both popular and “elevated.” It has been said that there is no direction in fashion today: you can wear whatever you want. This is also true in the visual arts. And “serious” or classical music currently includes almost any style—you can compose like Bach, Schumann, Ravel, Prokofiev, Stockhausen, or Glass and be taken seriously, and you can even mix those up in the same piece and get away with it. Beyond that, the lines dividing classical and popular music are dissolving. And popular music has 1001 idioms, genres, and styles, not to mention the almost uncountable “mash-ups.” Really, anything goes.

That is also true in philosophy and even in science, as new and resuscitated interpretations of new and old discoveries create visions and theories that are directly at odds with one another — in areas ranging from particle physics to vaccination science to the study of the nature of consciousness (which is of vital interest to AI) — all claiming to be supported by evidence and each taken seriously by knowledgeable people. It is certainly true in politics, ethics, behavior, and mores. There is simply no overall direction, and certainly no center. That is always true to a degree, but it is much, much more pronounced now.

It is all of a piece only by virtue of being, as Elvis Presley said, “all shook up.”

Some see this as a form of decadence. But it also represents a flailing about to try to find something that works … anything … in the radically divergent situations we face. We seem only to know how to look inside the old boxes we have, and they no longer contain anything fit for purpose. We are all, fearfully, practicing the politics of nostalgia. But the past does not work today, our current systems and ideas do not work, and we don’t see where a future lies that might work. We find ourselves at sea with no life-raft we can grab onto.

Sometimes this is called a “horizon problem” — meaning that the solution lies over a horizon beyond which we cannot see from our present vantage point. During the energy crisis of 1979, President Jimmy Carter exaggerated when he said we were in a civilizational crisis of confidence. That is no exaggeration today.

In Hemingway’s novel The Sun Also Rises, Mike Campbell answers the question of how he went bankrupt: “Two ways. Gradually and then suddenly.” This is how major change often happens. We would be wise to recall how quickly the Soviet Union fell in December 1991. It had seemed robust, threatening, and indeed almost impervious less than five years earlier, and looked reasonably secure even a few months before. But the decay had in fact been eating away at the system for decades.

The old Chinese curse, now repeated with tiresome regularity because it is so apropos to our day, says “may you live in interesting times.” We are indeed there.

Where will our situation lead? And how do we navigate it? These are among the most urgent questions for all of us today, and they extend across all the domains of life. If you have little idea where the future is heading, and you can’t rely on the elements you could in the past, then how do you prepare for it? How, for example, do you ensure the well-being of your family? How does an investor manage, let alone hedge, a portfolio in circumstances like this? Aside from intensive vigilance, the ability and willingness to move quickly, and hope, it is very hard to answer these questions.

Writing in another tumultuous time at the end of the 17th century, the English poet John Dryden closed his Secular Masque with:

All, all of a piece throughout;

Thy chase had a beast in view;

Thy wars brought nothing about;

Thy lovers were all untrue.

'Tis well an old age is out,

And time to begin a new.

Investment and Race

Although US President Donald Trump once took some credit for popularizing Juneteenth (June 19), an official national holiday marking the freedom of enslaved Americans, celebrations this week were muted and in some cases canceled. Trump himself did not mention the holiday. Some US businesses also stepped back from it, although not with the speed with which they have moved away from DEI (diversity, equity, and inclusion) programs, which have been explicitly targeted by the Trump administration (see Signal, “The Rollback,” Feb. 28, 2025). It is not hard to tie this deprecation of Juneteenth to the argument that a form of white nationalism is backed by the White House. SIG’s view, however, is that the reality is more complicated, more politically opportunistic, and of more significance to American business.

The racial politics of the current moment seem to pivot around class and social mobility as much as physical appearance. It should be recalled that DEI efforts were losing popularity before President Trump took office, notably among nonwhite Americans. In the brief period from February 2023 to October 2024 (before Trump’s victory), according to Pew Research, Asian-American support for DEI programs at work went from 72% to 57%, while those with a neutral view rose from 18% to 28%, meaning that those Asian Americans who either opposed DEI or preferred not to venture an opinion had reached 43%. Unfortunately, Pew’s summary of the 2024 research did not highlight the same figures for Hispanic Americans, but in its February 2023 survey Hispanic support for DEI had been significantly weaker than Asian support. Given that the 2024 survey also found a broad decline across groups in support for DEI, it does seem unlikely that Hispanic support for it would have gone up while Asian support plummeted.

Dwindling non-white support for DEI might be related to views on whether racism in American society is systemic. The American Communities Project researches American views on a variety of topics based on a 15-part typology of communities, from Aging Farmlands (91% white, strongly Republican, with low unemployment and low education) to Hispanic Centers (more than 50% Hispanic, about evenly split between the two political parties, with low voter turnout and twice the national average of people lacking health insurance) to College Towns (younger, 78% white, 6% black, mildly Democratic) to the African American South (more than 40% black, 3% Hispanic, strongly but not overwhelmingly Democratic). The ACP also includes communities like Mormons (“LDS Enclaves”), Native American Lands, and Military Posts that rarely surface in statistical assessments of the national community. One can find fault with any of these categories, but they have the virtue of complicating the straitjacket of race, income, and education.

One ACP question asks whether respondents agree or disagree with the statement, “Racism is built into the American economy, government, and educational system.” Just 48% in Hispanic Centers agreed with that statement, a tie with Native American Lands. The lowest affirmative share was in Aging Farmlands (38%); the highest shares were in the African American South and Big Cities (both 58%), followed by College Towns (55%), Urban Suburbs (54%), and Military Posts (52%). The perception of systemic racism was highest in areas with large black populations — the US military is nearly twice as black as the national population — and large shares of better-off and better-educated Americans, the last two categories being disproportionately white although also disproportionately Asian. (Asian households are better educated and wealthier than any other racial or ethnic group in the US.) Unfortunately the ACP does not have an Asian community among the 15.

One can reach any number of conclusions from these surveys, including that white Americans do perceive systemic racism, and more so as they climb the social ladder — although there is also a clear partisan divide on how significant it is. The relationship to social mobility does seem relatively clear. In the ACP studies, Hispanic Centers were the community least likely (37%) to feel that “it is increasingly hard for someone like me to get ahead in America” and also the least likely of the 15 communities to agree (61%) that the US is in decline. In both cases, the community at the opposite end of the optimism spectrum was Evangelical Hubs (90% white, with income and education levels below the national averages, poor health care, and low voter turnout). This is the community that least sees itself as upwardly mobile.

In presidential races, the Republican coalition has, of course, become steadily more Asian, Hispanic, and black. (Asian voters in 2024 were 9% of the Republican coalition.) The Democratic candidates’ Asian support dropped from 74% to 61% nationally between 2012 and 2024, and from 70% to 57% in the battleground states. About the same pattern held, nationally and in the battleground states, for black and Hispanic voters. In a highly partisan political landscape, nonwhite voters, by leaving the Democratic party, have become a crucial swing vote.

If social mobility is a key factor, then these voting patterns might not be much affected by what happens with either DEI or Juneteenth. Republican politicians have consistently stressed that the United States is a land of opportunity more than their Democratic counterparts have. Hispanics and Asians disproportionately reach for that opportunity, far more than their white counterparts. The number of Hispanic-owned businesses grew 44% from 2018 to 2023 while the number of white-owned businesses slightly declined. Meanwhile, Asians, despite their lower numbers, owned more US businesses than Hispanics or African Americans, and had the largest estimated receipts ($1.2 trillion in 2022, the most recent year for which the census has public data).

At the same time, non-white businesses often do find it harder to attract investment than their white counterparts. A Stanford study argued that if “Latino-owned businesses had the same average revenue as white-owned businesses, it would add $1.1 trillion to the U.S. economy.” In short, there is an under-exploited investment opportunity in the non-white parts of the US economy. The Republican party, at times despite itself, discovered this opportunity in political terms. Investors could discover it in business terms as well.

Sputnik, AI, and the Nature of Victory

The US foreign-policy community has been gathering itself around the goal of winning the AI race against China. The problem is that defining “winning” is not at all easy. If winning consists of US companies, in cooperation with the US government, enjoying a monopoly on the best AI technology for some extended period — which does seem to be what is expected — SIG’s view is that winning is nearly impossible. The only way the US could come close is by sharing technology within some type of alliance. But that would entail non-American companies within the alliance having revenues and profits of their own. The US and US companies cannot “win” this alone.

As SIGnal has emphasized before, digital technology has been taking the world’s defense sectors by surprise for some 30 years. Whether it is low-earth-orbit satellite swarms, drones, or navigational improvements, technology developed for one purpose becomes a military must-have for security uses. Proliferation is built into such a process. Military hardware needs software; software lends itself to proliferation, theft, imitation, and improvement. Artificial-intelligence software is no different.

Containment of American AI within US boundaries goes against the nature of the 21st-century technology industry. Most innovation comes from the private sector, whose ability to maximize profit and minimize costs depends on a global marketplace for products and labor. The defense sector is not the private sector but a curious public-private blend. American defense companies do sell a lot to overseas customers, but the customer whose needs shape the greater part of production is the US government. Proliferation of American defense contractors’ products, including software and data, is carefully regulated. Workers need to get government clearances. Contracts have to conform to official bureaucratic standards. There is plenty of red tape. The payoff for defense companies has been the security of long-term contracts and a relatively high level of protection from competition — notably from foreign competition.  The main downside is that profits from such quasi-public business, in the absence of corruption and favoritism, are limited by the obligation of Congress to ensure that government is not over-spending. Innovation within the defense sector thus seems to come up against natural limits. That is not the case in the private sector, which is why so much military innovation comes from outside the defense sector and commonly occurs for reasons that have nothing to do with defense.

This is abundantly true of AI innovation. If the US government wanted to make AI innovation henceforth a government-controlled process, it would amount to turning AI companies into defense companies — which would remove much of their incentive for innovation, defeating the purpose of the exercise. It would not be much of a victory in the race for AI dominance.

By contrast, operating with trusted partner countries would have some of the advantages of globalization — multiple labor and consumer markets to choose from — while preserving the goal of excluding China and other antagonists. Of course, forming some sort of digital alliance structure has been a US goal since the middle of the first Trump administration. Results have been mixed. There has been a contradiction at their core: The US wants partners but insists on being the dominant one. That kind of dominance cannot work in the case of private-sector-led technology innovation.

Fortunately US tech companies, although in their own ways just as hungry for dominance as the US government, have become accustomed in the last decade to competing in markets with foreign companies and not always winning. They have invested huge amounts in overseas markets: to pay suppliers, establish their own production, or attract customers but also to take advantage of the huge and growing innovation ecology that exists outside the United States. And foreign governments and private competitors have gotten used to them as well. The degree to which US tech companies can be profitably active in non-American markets without dominating them is an example of a type of loose alliance. The struggle with China is an important shaping factor but it does not distort everything it touches.

Learning from the success of this private-sector-led approach to the US-China tech contest could lead to a public-sector variant that could help control AI proliferation while accepting that winning the AI race with China, in the winner-take-all sense, cannot be done. A different type of victory might be possible though. After all, when the US, following the Soviets’ shocking Sputnik launch in 1957, went all out to win “the space race” against the USSR, it did not so much prevail as demonstrate its ability to continue to innovate at a pace the Soviet Union could not match. The result, in 1975, was the Apollo-Soyuz mission, in which American and Soviet crews docked and worked together in orbit (as Russians and Americans still do today aboard the International Space Station), and the growth of an international scientific subculture that played an important role in bringing the Soviet experiment in oppressive governance to a close.

AI is Just a Tool

By Dee Smith

There are many problems with AI, some of which I will explore in future posts. But the most basic problem is that, as we have all experienced, computers break.

Keeping computers running requires multiple people capable of fixing them, available around the clock.

Remembering this, is it a good idea to give more aspects of our lives over to “intelligent” systems so undependable? The things we rely on to obtain the food we eat, the water we drink, and to make, manage, and spend our money? The systems we use to conduct business, to take care of our health, our critical infrastructure, and our national security?

We already do, of course, but the teams are in place to fix them when they malfunction.

The unreliability of computers is not a passing problem. Computer systems, considered as a whole, are scarcely more reliable now than they were 30 years ago. Hardware is somewhat more reliable, but software is increasingly complex, increasingly unpredictable (complex systems are inherently more unpredictable), and increasingly unreliable.

Relying on AI systems makes us vulnerable in several critical ways. First is their exposure to attack. To cite just one example: the discovery of previously unknown flaws enables “zero-day exploits” — criminal or terrorist attacks that take advantage of those flaws before they can be patched.

Second are the continuing “hallucinations” AI experiences, where it gives entirely wrong, and sometimes nonsensical, information, often for reasons computer scientists do not understand. What if it does this while managing an element of critical infrastructure and the problem is “inside” the system, where it cannot easily be detected or fixed?

Third, all computer systems are subject to severe malfunctions due to rare, but potentially catastrophic, single-event upsets (SEUs) and other single-event effects (SEEs) caused by cosmic rays bombarding the earth.

Fourth is AI’s requirement for a vast and ever-increasing level of electrical power for operation.

The reason computer systems are so ubiquitous is, of course, money. This works in two ways: the money being made and the money being saved by replacing human laborers. From a social standpoint, the latter may well be a Pyrrhic victory: displacing millions of people from their jobs creates a huge social cost, in real money.

Are computer systems, in general, more efficient than humans? There is no evidence that they are. Computers can crunch numbers in narrow mathematical operations much faster than humans — but that discounts the enormous computational power of the human brain, let alone the brain of a bird or even an ant, doing everyday things. There is no real understanding of how these biological intelligent systems work. Computer systems seem more efficient only because of the extremely limited scope within which they operate.

Consider two alternatives, at opposite ends of the spectrum. One is that computer systems, as they become more and more complex, also become more and more fragile. When a system related to food production, or finance, or national security breaks catastrophically somewhere, the failure cascades through the system.

What if systems could be made substantially more reliable? Perhaps some unforeseen breakthrough will dramatically improve their dependability. Then suppose, as some people insist (incorrectly to my thinking), that AI can and will progress to Artificial General Intelligence (AGI). Imagine that this results in a superhuman intelligence. It could be one that emerges at a critical-mass-type point, almost in an instant (this is called the “singularity” by AGI aficionados). Were this to happen, we have no way of knowing whether such an entity would be benign, neutral, or malicious to humans.

But if such an AGI is trained on the sum total of human knowledge and expression, then that AGI is going to be loaded with all the bad along with the good. Do we really want to live in a world governed by transcendently intelligent and powerful machines trained on the behavior of what are essentially clever, volatile, often enraged chimpanzees? (We share 98.4 percent of our DNA with chimps.) Watching any war movie, or really most any movie, would suggest we might not.

And if the AGI was not trained on human knowledge and culture, what would it be trained on?

Biological systems have had about 4 billion years of evolution on this planet to become dependable in operation. They are generally able, as living systems, to survive constant bombardment by radiation from space, extreme temperatures, rapid changes in climate, changes in atmospheric chemistry — and most important, to survive without someone standing by to repair or reboot them. This is a property known as homeostasis. Life has evolved naturally over an immense period of time through adaptation: trial and error.

On the other hand, our computer systems — based on silicon, not carbon — do have a very fallible creator: us. And they have been around about 70 years, or roughly one fifty-millionth as long as biological systems.

The belief in the inevitable ascendancy of AGI is an article of faith for many involved in the computer industry and for others outside the industry who uncritically accept this “techno-religious” belief system. In its more virulent forms, it is teleological: a burning faith in an inevitable direction of history, in which AGIs are the successors to humanity. And in which the sacred duty of computer scientists is to bring about the birth of this supremely intelligent “life” form.

If I had told you 30 years ago that you would have in your pocket a self-powered device the size of a pack of cards that could tell you how to drive, turn by turn, from your current address to a building in a city 1000 miles away, you would probably have thought that it must be intelligent to be able to do this.

Do you think of your smart phone that way today? My estimation is that this is how we will think of AI in 30 years: a useful, not entirely dependable tool. Nothing more.

The Rest Is Software

US President Donald Trump’s visit to the Persian Gulf brought the region back into the American camp on artificial intelligence. The White House’s cancellation of the Biden administration’s AI-diffusion regulation was well timed: the message of both the trip and the cancellation was that this administration will not draw distinctions, as its predecessor did, in advancing what Commerce Secretary Howard Lutnick called “Trump’s vision for US AI dominance.” The US is, in a sense, trying to de-regulate AI politically. Washington’s move to block AI regulation by US states is also part of this. In SIG’s view, whether such de-regulation will achieve the goal of AI dominance is a different question.

As with crypto, the current administration’s bias with AI is to let the chips fall where they may, so to speak, while also aggressively using the power of the state — as investor, as enforcer, as customer — to secure American advantages. Trump’s experiences of being deplatformed by Big Tech must have shaped his views: bitterness over the suppression of conservative speech, alongside the supposed promotion of anti-conservative speech, has been a dominant note since his second inauguration. In this scenario, technology and tech innovation were shown not to be autonomous forces, proceeding according to their own logic, perhaps capable of being channeled but not of being controlled. Rather they were the effects of companies run by individuals who could be influenced. That was well within the comfort zone of a lifelong businessman. (See the tariff retaliation against Apple for relocating its China production to India rather than the US.) It is a pro-market perspective in a way, but with the market understood as a place for ruthless competition among a small number of unconstrained players rather than as a mechanism for maximizing the efficient distribution of capital and labor.

Similarly, the role of the state in this perspective is to personify the nation in unconstrained and ruthless competition among states for, in U.S. Commerce Secretary Lutnick’s term, “dominance.” President Trump’s appetite for military confrontation in his first term was low, and that seems to be carrying into his second term. His appetite for economic confrontation was relatively high in term one and has gone to a new level in term two. The tools of the state are the weapons he has for such confrontation. They are directed toward securing dominance. Trump is personifying the powerful idea of economic nationalism.

The difficulty, with regard to “US AI dominance,” is that the AI sector is not like other industrial or commercial sectors. The preferred means for dominating AI has been the control of hardware, as in export controls on leading-edge chips or chip-design lithography equipment. Biden’s AI-diffusion regulations, like his CHIPS Act and much else, were about the geopolitics of hardware distribution. President Trump has opened that floodgate. But once the hardware starts flowing and the data centers are built the rest is software, the diffusion of which is extremely hard to control. Software can be stolen or replicated; more important, it can be developed independently, as DeepSeek has shown. The supply of chips and what is necessary to manufacture them can be choked off, up to a point. The supply of engineers and software-engineering skills really cannot. It will be diffused regardless of what the US or China want.

Among other things, this means US AI dominance depends on the strength and autonomy of US universities, the freedom to innovate in the US tech sector independent of political agendas, the smooth functioning of open global markets, sensible market pricing of resource inputs, the reduction of obstacles to the cross-border movement of labor … all of which run contrary to current US policy.

The Gulf states are investing in US AI infrastructure on the way to building their own systems, which will have the capacity to become independent of US systems (see SIGnal, “The America Stack,” Feb. 5, 2025). The Emiratis are not happily volunteering to be hostages to US AI dominance. They are seizing the opportunity to gain access to the best technology that will enable them to maximize their own sovereignty while positioning themselves to be a sort of port for the storage, manipulation, and distribution of data, just as Dubai’s port operates with coffee, tea, and so much else.

The pattern is similar elsewhere, although no one can direct capital with quite the speed, and in quite the volume, that the Gulf states bring to bear. Malaysia hesitated for a moment at new deals for Chinese technology when Washington threatened retaliation against states using Huawei’s latest AI chips, but in the end, the shape of AI is not going to be determined by hardware. The massive computing power required to participate in the search for the grail of Artificial General Intelligence (AGI) is indeed a hardware question, but for sub-AGI artificial intelligence, which might well prove to be most if not all of AI, hardware is only one factor. The rest is software. And US dominance of it is unlikely to be secured using the current means.