TEL AVIV, Israel — Major U.S. tech companies have significantly enhanced Israel’s ability to identify and target alleged militants in Gaza and Lebanon, chiefly through a sharp increase in the artificial intelligence and advanced computing services they provide. That technological boost has coincided with a steep rise in civilian casualties, fueling concerns that these tools are contributing to the deaths of innocent people.
Militaries have for years paid private firms to build specialized autonomous weapons. The current conflict, however, marks a new phase: advanced commercial AI models made in the U.S. are being used actively in warfare, even though they were not designed to make life-and-death decisions, raising difficult ethical questions.
The Israeli armed forces use AI to sift vast volumes of intelligence, intercepted communications and surveillance data in order to flag suspicious behavior and track enemy movements. After Hamas’ surprise attack on Oct. 7, 2023, the military’s use of technologies from companies such as Microsoft and OpenAI surged dramatically, a recent investigation found. Drawing on internal documents, interviews with military officials and accounts from company employees, the investigation detailed how AI systems are used to select targets and how inaccurate data or flawed algorithms can lead to failures.
“This robust confirmation shows that commercial AI models are actively used in warfare contexts,” said Heidy Khlaaf, AI Now Institute’s chief AI scientist and former safety engineer at OpenAI. “The implications raise serious ethical concerns regarding technology’s role in such operations.”
The U.S. technology sector’s military involvement has grown markedly in recent years, and its partnerships with Israel are expected to expand further. That raises pivotal questions about Silicon Valley’s role in automated warfare and suggests that Israel’s military practices may shape how other armed forces adopt these tools.
The Israeli military’s use of Microsoft and OpenAI artificial intelligence reportedly spiked in March 2024 to nearly 200 times the level seen before the October attack. The amount of data it stored on Microsoft’s servers doubled between then and mid-2024 to more than 13.6 petabytes, hundreds of times the digital memory needed to hold every book in the Library of Congress. The military’s use of Microsoft’s computing resources also rose by approximately 66% during the first months of the conflict.
After Hamas’ initial assault killed about 1,200 people and took more than 250 hostages, Israel set out to incapacitate Hamas. The Israeli military has called AI a “game changer” that accelerates the designation of targets. The humanitarian toll has meanwhile soared: regional health agencies report more than 50,000 deaths in Gaza and Lebanon, and much of Gaza lies in ruins.
The investigation drew on interviews with current and former members of the Israeli military, including intelligence officers, many of whom spoke on condition of anonymity because of the sensitivity of the subject. It also drew on accounts from employees at Microsoft, OpenAI, Google and Amazon, most of whom likewise asked not to be named for fear of professional repercussions.
The Israeli military acknowledges that its analysts use AI-enabled systems to help identify targets, but it stresses that those assessments must be reviewed by senior officers to comply with international law, weighing tactical advantage against potential civilian harm. A senior intelligence official said lawful military targets can include combatants and structures used by militants, and asserted that human oversight remains integral to operations even when AI is involved.
“Our AI tools significantly enhance the intelligence gathering process,” an Israeli military representative conveyed. “They enable rapid target identification while striving for accuracy, often reducing civilian casualties along the way.”
The Israeli military declined to answer detailed questions about its use of commercial AI products. Microsoft declined to comment and did not respond to specific questions about the AI and cloud services it provides to the military. The company publicly touts its commitment to using technology for positive global impact, and its 2024 Responsible AI Transparency Report emphasizes safe development practices while making no mention of its substantial military contracts.
OpenAI, the maker of ChatGPT, supplies its advanced AI models through Microsoft’s Azure cloud platform; Microsoft is OpenAI’s largest investor. OpenAI has distanced itself from military affiliations, saying its policies bar customers from using its products to build weapons. The company originally prohibited uses related to warfare but recently adjusted its terms to permit national security applications aligned with its mission.
Because AI-enabled systems are woven together with multiple intelligence sources, including human input, it is difficult to establish accountability when they contribute to erroneous, sometimes tragic, outcomes. One incident illustrates the problem. In November 2023, Hoda Hijazi was fleeing clashes near the Lebanese border with her three daughters and her mother when their vehicle was hit by an airstrike.
Before setting out, the adults had told the children to play outside the house so that any Israeli drones overhead would see they were traveling with kids. The family drove in a convoy with Hijazi’s uncle, a journalist, who followed in his own car. A drone was operating nearby, and moments later an airstrike hit Hijazi’s vehicle, sending it tumbling down a slope before it burst into flames. Her uncle pulled her out, but her mother and daughters did not survive.
Hijazi recalled that just before they left, one of the girls insisted on taking photos of the neighborhood cats, unsure whether they would ever return. Reflecting on the aftermath, she said, “The cats remain, but my girls are gone.”
Surveillance footage taken before the attack shows the family in the vehicle, and relatives believe Israeli drones should have been able to identify them. Following the airstrike, the military released video showing strikes on more than 450 targets it attributed to Hamas.
An Israeli intelligence officer said AI has been used to help identify targets in recent years. In a case like this one, he said, AI may have pointed to a specific house, with other intelligence leading analysts to believe a target was present at the location.
Strike decisions, the officer explained, are ultimately made by human commanders acting on that input. Errors can arise at several stages, from identifying the wrong target in the first place to misidentifying vehicles in fast-moving situations.
The Israeli military expressed regret over the deaths but declined to comment on whether AI systems informed that particular targeting decision, saying only that such incidents warrant closer scrutiny.
Microsoft is one of a broad array of U.S. tech companies supporting Israel’s military operations. Google and Amazon provide extensive AI and cloud services under a $1.2 billion agreement known as Project Nimbus, which Israel draws on as it builds out AI-assisted targeting and expands its ability to respond to threats.
The Israeli military uses Microsoft Azure to compile data gathered through its extensive surveillance operations and to transcribe and analyze intercepted communications. An Israeli intelligence officer said Azure enables rapid searches across very large bodies of text, speeding the identification of significant conversations.
Microsoft documentation shows a surge in the use of these AI models after the initial attacks. Critics argue that relying on such systems is troubling, particularly when errors occur, because AI still has real limitations in accurately transcribing and translating sensitive communications.
Misinterpretation of Arabic phrases, for instance, has stripped intelligence analysis of needed context and contributed to wrongful targeting. One example cited involved machine translation confusing an Arabic term for a grenade launcher with one related to payments, underscoring the risks of hasty reliance on automated translation without adequate verification.
Profiling individuals based solely on intercepted communications or automated analysis can also produce faulty assumptions. Such a system may misidentify a group of students as a threat, increasing the risk of wrongful allegations if no one investigates further.
As pressure to generate targets quickly intensifies, some younger officers may accept erroneous conclusions, driven by urgency rather than thorough analysis, and cases have emerged of AI flagging incorrect targets that received no rigorous human assessment.
Tal Mimran, who spent a decade as a reserve legal officer in the Israeli military, described how target review has been transformed. Vetting targets once required days of work by a sizable team; AI systems now allow numerous approvals every day.
Mimran also worries that over-dependence on AI could reinforce existing biases. “Confirmation bias can hinder independent verification,” he said, warning that some users may stop questioning the machine’s output and grow less likely to catch and correct its errors.
Microsoft’s long-standing relationship with the Israeli military became even closer after the conflict escalated. A high-ranking military official said outside vendors were vital to maintaining operational capacity as demands rose, and she credited significant advances in intelligence capabilities to AI technologies supplied in partnership with leading U.S. tech firms.
A three-year, $133 million contract with Microsoft signed in 2021 reinforces the relationship and makes the Israeli military Microsoft’s second-largest military customer outside the U.S. The military’s work with Microsoft is categorized as a critical use case and involves hundreds of active project participants across multiple divisions.
One support request described urgent updates needed to keep life-saving systems running during the crisis, and Microsoft’s support teams reportedly handled numerous such queries from the Israeli military, underscoring how deeply intertwined the two became during this period.
The Israel Defense Forces have long been at the forefront of integrating AI into warfare. In 2021 they used AI-driven data analysis to select targets during a conflict in Gaza that commanders described as the military’s “first AI war,” a marked departure from previous campaigns.
Employees at several of the tech companies have raised ethical concerns about the military partnerships, and some have reportedly been dismissed for protesting work they believe facilitates violence. “Tech has become the modern ammunition in conflicts,” argued one dismissed employee who has campaigned against Microsoft’s military contracts.
As the fighting continues, the U.S. government and the tech giants are pressing ahead with the development of more sophisticated AI-enabled weapon systems. That trajectory makes clearly defined ethical frameworks and stronger accountability essential as these powerful technologies are drawn into conflicts that exact heavy civilian tolls.
Amid the turmoil in Gaza and Lebanon, families affected by the violence continue to seek justice and answers about the losses they have endured. “Why was my children’s laughter extinguished on that fateful day?” one grieving father asked.