US technology leaders provided Israel with AI frameworks, sparking concerns over technology’s involvement in military conflicts.

TEL AVIV, Israel — Recent advances in artificial intelligence (AI) and computing have significantly enhanced Israel’s ability to identify and strike suspected militants in Gaza and Lebanon with greater speed. But this surge in technological capability has been accompanied by a sharp rise in civilian casualties, and by growing concern that these tools may be contributing to the deaths of innocent people.

For years, militaries have commissioned private companies to build customized autonomous weapons. Israel’s current conflicts, however, stand as a leading example of commercial AI products developed in the United States being used in active combat, despite concerns that these technologies were never designed to make life-and-death decisions.

The Israeli military employs AI to analyze large volumes of intelligence, intercepted communications, and surveillance data to detect suspicious activities and track enemy movements. Following the unexpected assault by Hamas militants on October 7, 2023, the military’s use of technology from Microsoft and OpenAI surged dramatically, as revealed by an investigative report.

The investigation offered new details about how AI systems select targets, and about the ways errors can arise from incorrect data or flawed algorithms. The findings were supported by internal documents, data, and exclusive interviews with current and former personnel in Israeli security services and at the tech companies involved.

In the wake of the Hamas attack, which killed approximately 1,200 people and saw more than 250 taken hostage, Israel vowed to eliminate Hamas. Israeli military officials hailed AI as a significant factor that would enable faster target identification. Since the onset of the conflict, more than 50,000 people have been killed in Gaza and Lebanon, and nearly 70% of the buildings in Gaza have been destroyed or damaged, according to local health authorities.

“This marks the first definitive evidence we have that commercial AI systems are being employed in warfare,” said Heidy Khlaaf, chief AI scientist at the AI Now Institute and a former safety engineer at OpenAI. “The consequences for how technology facilitates such unethical and unlawful military operations are substantial.”

U.S. tech companies have deepened their involvement with the Israeli military during the ongoing conflict. Among them, Microsoft’s relationship with the Israeli military stretches back decades, and it grew closer after the Hamas attack, as the conflict strained Israel’s own server capacity and increased its dependence on outside vendors, according to the military’s chief information technology officer, Colonel Racheli Dembinsky. She emphasized that AI has provided “substantial operational effectiveness” during military operations.

An analysis of internal information showed that the Israeli military’s use of Microsoft and OpenAI’s AI tools surged to nearly 200 times its level just before the October 7 attack. The amount of data stored on Microsoft servers doubled over the same period to more than 13.6 petabytes, roughly 350 times the storage needed to hold the written content of the Library of Congress. The military’s use of Microsoft’s vast server facilities also rose by nearly two-thirds in the first months of the war.

Microsoft declined requests for comment. In a general statement on its website, the company reiterated its commitment to human rights and to technology’s positive potential worldwide. Its 2024 Responsible AI Transparency Report outlines its approach to managing generative AI risks but makes no mention of its military contracts.

The most advanced of the AI models involved come from OpenAI, the maker of ChatGPT, and are purchased by the Israeli military through Microsoft’s cloud infrastructure. OpenAI has emphasized that it has no formal agreement with Israel’s defense forces. Its usage policies long prohibited military applications of its products, but the company recently revised those terms to permit uses related to national security.

The Israeli military has declined to provide specific details about its use of commercial AI systems from American firms, but it said its analysts use AI to help identify and evaluate targets in accordance with international law, weighing military advantage against potential collateral damage. It maintains that these tools make intelligence work faster and more accurate, producing more targets without sacrificing precision and often reducing civilian harm.

Other U.S. tech companies have also supported the Israeli military’s operations. Google and Amazon provide cloud computing and AI services under “Project Nimbus,” a $1.2 billion contract signed in 2021. Firms such as Cisco, Dell, and IBM’s Red Hat have supplied additional cloud computing resources, while Palantir Technologies has announced a strategic partnership to supply the Israeli military with AI systems supporting its war effort.

Following the update of OpenAI’s terms, Google recently revised its policy as well, removing its earlier prohibitions on using its AI technologies for weapons and other military purposes. The company said it remains committed to responsibly developing AI that protects people, promotes global growth, and supports national security.

The Israeli military relies on Microsoft Azure to compile and analyze data gathered through mass surveillance, including the transcription and translation of intercepted communications, according to an intelligence officer familiar with these operations. That data can then be cross-checked against Israel’s in-house targeting systems.

The Israeli military says bilingual personnel review translations for accuracy, but errors in machine-translated material have reportedly contributed to mistaken targeting.

The ethical complexities of rapidly deploying commercial AI technologies in warfare continue to spark debate, with consequences that reach well beyond any immediate tactical advantage.