US tech companies provided AI models to Israel, prompting concerns about technology’s impact on warfare.

TEL AVIV, Israel — Artificial intelligence and computing services provided by U.S. tech firms have sharply enhanced Israel's ability to monitor and strike perceived threats in Gaza and Lebanon. Alongside a surge in civilian casualties, the expanded use of these tools has stirred concerns about the ethics of deploying commercial technology in warfare.

For years, national militaries have contracted private companies to build autonomous weapons, but the current conflict marks a notable instance of American-developed commercial AI models being used in active combat. These models were not originally designed to help make life-and-death decisions.

The Israeli military uses AI to sift through vast amounts of intelligence data, including intercepted communications and surveillance footage, to flag suspicious activity and track enemy movements. After the surprise attack by Hamas militants on October 7, 2023, the military's reliance on technologies from Microsoft and OpenAI grew sharply, an extensive investigation found.

Details from the inquiry shed light on how AI systems are employed for target selection and highlight the potential failures that can occur, such as using inaccurate data or flawed algorithms. Insights were gathered from internal documents and interviews with current and former officials from both Israel and the tech companies involved.

In the aftermath of the October attack, which resulted in approximately 1,200 deaths and over 250 hostages taken, Israel’s principal aim has been to dismantle Hamas. The military has referred to AI as a “game changer” for enhancing target identification speed. Reports indicate that since the conflict began, over 50,000 individuals have died in Gaza and Lebanon, with nearly 70% of Gaza’s infrastructure being heavily damaged.

“This marks the first confirmation that commercial AI models are actively employed in combat situations,” stated Heidy Khlaaf, a leading AI scientist and former safety engineer with OpenAI. She emphasized the significant implications for the future role of technology in warfare, particularly regarding its ethical and legal ramifications.

US tech giants, notably Microsoft, have long-standing partnerships with the Israeli military. That collaboration intensified after the Hamas attack, as military operations strained Israel's own capacity and pushed it toward external vendors. In a presentation, Colonel Racheli Dembinsky, head of the military's information technology division, underscored AI's substantial impact on operational efficiency in Gaza, with visuals from Microsoft Azure, Google Cloud, and Amazon Web Services on display.

Analysis showed that the military's use of Microsoft and OpenAI's AI offerings surged to nearly 200 times the level recorded before the October 7 attack. The volume of data stored on Microsoft servers doubled between then and July 2024, exceeding 13.6 petabytes, roughly 350 times the data storage capacity of the Library of Congress. The military's use of Microsoft's server resources also rose by about two-thirds in the first months of the conflict.

Microsoft declined to comment on its relationship with the Israeli military and did not respond to detailed questions about its cloud and AI services. The company maintains that respecting human rights is central to its values and says it is committed to ensuring technology plays a positive role worldwide. Its 2024 Responsible AI Transparency Report describes efforts to manage the risks associated with generative AI.

OpenAI says it does not have a formal partnership with Israel's military and points to policies that discourage use of its products for harmful military purposes. Last year, however, the company revised its terms to permit certain "national security" uses.

While the Israeli military declined to answer detailed questions about its use of commercial AI, it says AI-enabled systems help analysts identify targets, with selections reviewed by senior officers to ensure compliance with international law, weighing military advantage against potential harm to civilians.

In addition to Microsoft, other American tech companies like Google and Amazon provide cloud services to the Israeli military under “Project Nimbus,” a contract valued at $1.2 billion. This partnership includes testing in-house AI-powered targeting systems since its initiation in 2021. Tech firms such as Cisco and Dell also supply necessary infrastructure, with assistance from Palantir Technologies, which cooperates with Microsoft on U.S. defense projects.

Moreover, both Google and OpenAI recently modified their policies to allow the use of their AI technologies in national security contexts. Google says it is committed to developing and deploying AI responsibly, prioritizing the protection of people while supporting national security efforts.

The Israel Defense Forces harness Microsoft Azure to aggregate data amassed from extensive surveillance efforts, transcribing conversations and translating messages, enabling cross-referencing with existing military targeting systems. This data processing aids in quickly identifying relevant communications and potential threats.

However, there are concerns about the reliability of these AI systems. Military officers acknowledge that errors can arise from technological limitations, including inaccurate language translation, which can lead to critical mistakes in target identification. Although checks are meant to verify translations, intelligence personnel have reported inaccuracies, underscoring the challenges of relying on the technology in operational environments.