ChatWars

The showdown between the US Department of War and AI giant Anthropic over the past week has focused mainly on the public-private struggle over power: Can the federal government, as the Trump administration maintains, force a private company to work with the government on the government’s terms? Or do private companies have the final say over government use of their products? SIG’s view is that underlying this moral-political debate is a deeper question about labor costs. As SIGnal has argued in different contexts over the past few years, if AI is “about” anything, it is about the cost of labor. Government has long struggled to compete with Silicon Valley on wages. The AI tools built by Anthropic, OpenAI, xAI and Google, among others, offer a way to ease that problem, just as they offer private companies ways to improve labor productivity. The issue is not private-sector patriotism. The issue is labor.

The struggle over who shapes the direction of digital innovation is as old as digital technology itself, and indeed dates back to pre-digital innovation during World War I around the challenges of “fire control” — improving the accuracy of munitions delivery. Many of the key figures in developing digital technology for use in war, such as Norbert Wiener and Vannevar Bush, cut their teeth on fire-control problems in the First World War. The commercial-academic-military cooperation that birthed the digital age in World War II has its own distinctive and peculiar history. (The venture-capital model was also rooted in wartime procurement practices, but that is another story.) The US’s extraordinary wartime spending after 1942 combined with Cold War fears of Soviet technological competition: government money financed digital development, and thereby shaped its purposes, into the 1970s. When the government’s Internet project became commercializable beyond any expectation, the government yielded to private-sector leadership in shaping the digital landscape — and government, including the military and intelligence sectors, fell behind.

The turning point can usefully be dated to DoD’s Project Maven, undertaken in cooperation with Google. The spur for Project Maven was a labor shortage of sorts: there was too much targeting data — fire control again — for military employees to keep up with. Some Google engineers rebelled in 2018 when they discovered that their engineering skills were essentially being used to crunch data for the better delivery of firepower. Google eventually declined to renew the contract.

The story, of course, did not end there. With China playing the old role of the Soviet Union, all branches of the US military began investing heavily in digital automation of defense processes. At the same time, many in Silicon Valley (plus Seattle) came to terms with what it actually meant to work as contractors for the military and other government agencies. Government still struggled to compete for tech talent in its own hiring, but contracting provided ways forward.

Enter AI. Even as Maven was hitting a wall, Google announced major AI innovations: the Transformer architecture (2017) and BERT (Bidirectional Encoder Representations from Transformers, 2018). OpenAI released GPT-1 (Generative Pre-trained Transformer) in 2018. The rest is recent history. OpenAI launched ChatGPT in November 2022 with an interface that let ordinary people use the new tool, at a level of capability far beyond what Maven could do four years earlier. But the basic Maven challenge — there weren’t enough people to handle all the data, so a machine was needed to help them — was the same.

The massive flow of capital into AI-related investments was never based on military contracts or improved fire control. Nonetheless, at one level the military’s interest in improving targeting without hiring more targeters was identical to private companies’ interest in increasing productivity without hiring more workers. It was and is a short step from there to increasing productivity with fewer and fewer workers. This week, a Pentagon official announced an agreement with Google to use its AI tools to automate jobs across the Pentagon’s workforce of three million. Google’s own blog noted that its AI agents would help Pentagon employees automate tasks without needing to know any code. AI agents are particularly good at writing code, work that used to require expensive software engineers. Emil Michael, under secretary of defense for research and engineering, said that a type of war-simulation exercise that once took his staff six months was completed through the AI portal in six weeks. Outside the military sphere, the same logic drove Jack Dorsey, CEO of Block (owner of the payments system Square) and founder of Twitter, to lay off 4,000 of the company’s 10,000 workers.

The Pentagon’s attack on Anthropic has to be seen in the context of labor costs and the open competition among Anthropic, OpenAI, xAI (Grok), Google, Palantir and others for Pentagon and other US government contracts aimed at increasing labor productivity, which is to say reducing labor costs. That includes the labor cost of software talent.

But investors should remember that the competition among AI giants for US government contracts is not motivated principally by the search for profits. The motivation is more about gaining protection from regulation or other political interference. The AI giants would develop their tools in much the same way without any government contracts at all; their real field of competition, the commercial market, is on a vastly greater scale. That means that, in the end, however the Anthropic-Pentagon dust-up evolves, the AI sector has the upper hand, because national security has become dependent on it.