Monday, July 21, 2025

Silicon Valley’s New Frontier: The Militarization of Big Tech


Tech companies are going to war. This isn’t a metaphor. After years of avoiding public ties to the military-industrial complex, Big Tech has enlisted. Donald Trump’s return to the White House has been the final push for many companies to shed their wariness about signing contracts with the military. Beyond the tech magnates’ connection with the U.S. president, on display at his inauguration ceremony, Trump wants to invest a trillion dollars by 2026 to “modernize” the armed forces, which, in his view, involves introducing artificial intelligence (AI) into defense.

That’s music to the ears of Silicon Valley giants, who have seen that the Republican magnate means business. OpenAI, Google, Anthropic, and Elon Musk’s AI company xAI have each landed contracts worth up to $200 million to develop advanced AI capabilities for the Department of Defense.

Tech companies hiring Pentagon officials is nothing new. Meta has recently spearheaded efforts in this direction, according to Forbes, “to help sell its virtual reality and AI services to the federal government.” What’s less common is the reverse hiring process. In June, the U.S. Army announced the appointment of four reserve lieutenant colonels to its new Detachment 201, also known as the Executive Innovation Corps, tasked with “fusing cutting-edge technological expertise with military innovation.” Those chosen are Andrew Bosworth, Meta’s chief technology officer and a close confidant of Mark Zuckerberg; Kevin Weil, OpenAI’s chief product officer; Shyam Sankar, Palantir’s chief technology officer; and Bob McGrew, a former executive at Palantir and OpenAI.

The fact that there are Big Tech executives in military fatigues is both symbolic and indicative of the times we’re living in. The lines between Silicon Valley and the Pentagon are rapidly blurring.

The courtship has been constant for some time. In February, Google removed the restriction on developing weapons or tools for mass surveillance from its code of conduct. Microsoft acknowledged in May that, since the start of the Israeli invasion of Gaza, it has sold advanced AI technology and cloud computing services to the Israeli army. OpenAI, the developer of ChatGPT, won another $200 million contract in June to provide its generative AI tools to the Pentagon. The company also changed its usage policy in January 2024 to remove the ban on using its technology for “military and war” tasks: now, “national security use cases that align with our mission” are permitted. Back in December, the company announced a partnership with Anduril, a military technology startup that has formed a consortium with Palantir to enter defense tenders.

A Ranger participates in the IVAS Capability Set 4 tropical weather testing in Camp Santiago, Puerto Rico, in March 2021.

In November, Meta revealed that it had given the green light to make its AI models available to military contractors Lockheed Martin and Booz Allen Hamilton. Scale AI, the company in which Meta is investing $14.3 billion and whose founder, Alexandr Wang, has been hired to lead Meta’s general AI research division, has been chosen by the Pentagon to test and evaluate the large language models the military will use. And in May of this year, Meta announced an agreement with Anduril to develop virtual and mixed reality headsets for soldiers.

The “economy of genocide”

The United Nations Special Rapporteur on the Occupied Palestinian Territories, Francesca Albanese, describes in a report how corporate technology, cloud service providers, and arms companies are deeply intertwined in what she calls an “economy of genocide.” According to the report, Microsoft, HP, IBM, Google, and Amazon, among others, are implicated in surveillance technologies deployed there. IBM has contributed to the government’s collection and use of biometric databases of Palestinians, while Microsoft and Palantir, as well as Google and Amazon, provide cloud services and support for the Israeli government and military systems. Albanese has since been sanctioned by the U.S. over her reporting.

“From the perspective of the history of technology, I would say there is a continuity. Our Western concept of modern technology has its genesis in the military or security sphere,” says Lorena Jaume-Palasí, an expert in ethics and legal philosophy applied to technology. The internet was conceived as a secure communications system for the armed forces. Before taking us to our destinations, GPS guided missiles and submarines. And there are countless examples like these.

Then there’s the question of size. Eight of the world’s 10 largest companies by market capitalization are technology companies, all of them American: Nvidia, Microsoft, Apple, Amazon, Alphabet, Meta, Broadcom (like Nvidia, a semiconductor manufacturer), and Tesla. Only two, Saudi Aramco and Berkshire Hathaway, are in other businesses. It would be rash to underestimate the influence of the world’s most powerful industry. These companies have managed, for example, to have the development of ever more powerful AI treated as a matter of national security, even though it is driven by profit and harms the environment. Trump himself has said on several occasions that American companies must beat China in the AI arms race.

“We argue that this is simply a cover for these companies to concentrate even more power and funding,” says Heidy Khlaaf, chief AI scientist at the AI Now Institute, a research center focused on the societal consequences of AI. Presenting themselves as protagonists of a quasi-civilizational crusade protects tech companies from “regulatory friction,” branding any call for accountability as “a detriment to national interests.” And it allows them to position themselves “not only as too big, but also as too strategically important to fail,” reads a recent AI Now Institute report.

However, the fact that large commercial technology corporations handle national security issues can cause problems. “Building on top of widely available foundation models, like Meta’s Llama or OpenAI’s GPT-4, also introduces cybersecurity vulnerabilities, creating vectors through which hostile nation-states and rogue actors can hack into and harm the systems our national security apparatus relies on,” Khlaaf recently wrote in a New York Times op-ed. These systems can be manipulated by “poisoning the data” they are trained with. “AI companies have been able to circumvent the military standards that defense systems must follow, promoting an unfounded narrative of an AI arms race,” the engineer explains to EL PAÍS. “National security remains a key force shaping policymaking around AI, and is used by companies in the sector both to avoid regulations and to attract investment.”

Employee protests

Khlaaf notes that these corporations are able to do business with the military sector thanks to all of us. “The personally identifiable information used to train models allows AI to be used for military purposes, such as in ISTAR (intelligence, surveillance, target acquisition, and reconnaissance) capabilities, as this data allows systems to monitor and target specific populations,” she emphasizes. “Ultimately, whether or not we are users of AI tools, our data makes it possible for AI to be used for military and surveillance purposes without our consent.”

People protesting against Google's Project Nimbus contract with Israel and artificial intelligence for warfare in San Francisco.

The new direction of large technology companies is generating internal contradictions. Some employees have organized protests or even resigned over their companies’ ties to the military sector. Among the most recent episodes are the protests staged in April of last year by Google employees at the multinational’s offices in New York, Sunnyvale (California), San Francisco, and Seattle. The reason: the so-called Project Nimbus, a contract worth approximately $1.2 billion to provide cloud solutions to the Israeli government and its armed forces. These protests resulted in 28 dismissals.

More recently, in April of this year, Microsoft fired two employees who publicly complained about the supply of AI to Israel. In February, five other employees were ejected from a meeting at the company’s Redmond headquarters with CEO Satya Nadella for protesting contracts to provide artificial intelligence and cloud computing services to the Israeli military.

“Western democratic values are under threat,” Google DeepMind CEO and co-founder Demis Hassabis told Axios shortly after parent company Google changed its code of conduct to accommodate military-related activities. “We have a duty to be able to help with what we are uniquely qualified and positioned to do.” The Nobel laureate in chemistry cited the development of defenses against AI-powered cyberattacks and biological attacks as an example. “I’ve said on several occasions that I’m against autonomous weapons, but some countries are building them. That’s simply a reality.”

For Raquel Jorge, of Spain’s Elcano Royal Institute, the explanation for technology companies’ change of direction lies in the new defense context. There have always been wars in the world, but it had been a while since one directly affected U.S. national security interests. “On the one hand, we have the war in Ukraine since 2022 and the war in Gaza since last year. On the other, the return to the White House of Donald Trump, who has promised increases in defense spending and is demanding more resources from NATO allies,” she explains. “All of this means that the defense context is now one of rapid growth, which makes it easier for technology companies, which were previously very careful with their narrative in this area, to feel more comfortable talking about it.”



