AI has the potential to enhance humanitarian efforts, but it may also present significant drawbacks.

NEW YORK — The International Rescue Committee (IRC) is working to adapt to the significant rise in individuals forced from their homes in recent years by exploring new efficiencies, including the integration of artificial intelligence (AI) technologies.

Since 2015, the IRC has been developing Signpost, a comprehensive suite of mobile applications and social media resources designed to respond to inquiries in various languages for those in crisis situations. Thus far, the Signpost initiative, which collaborates with numerous organizations, has reached 18 million individuals. However, the IRC aims to expand its capacity significantly by leveraging advanced AI tools.

The demand for humanitarian aid has surged due to conflicts, climate disasters, and economic challenges, with more than 117 million people expected to be forcibly displaced in 2024, according to the United Nations refugee agency. Facing a growing number of people in need of assistance and serious funding shortages, humanitarian organizations are turning to AI technologies to bridge the gap between burgeoning needs and limited resources.

To achieve its objective of assisting half of the displaced population within the next three years, the IRC is developing a network of AI-driven chatbots. These chatbots will enhance the abilities of humanitarian workers and local organizations engaged with the Signpost initiative. Currently, the project is operational in countries including El Salvador, Kenya, Greece, and Italy, providing assistance in 11 languages. It utilizes a blend of large language models from leading tech firms like OpenAI, Anthropic, and Google.

The chatbot framework also incorporates customer service technology from Zendesk and receives additional support from companies such as Google and Cisco Systems.
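The article does not describe how the blend of models is wired together; as a purely illustrative sketch, one common pattern is to put several back ends behind one interface and fall back from one to the next if a provider is unavailable. The function names and stand-in providers below are assumptions for demonstration, not the IRC's implementation.

```python
# Illustrative sketch only: blending several LLM back ends behind one
# interface, with fallback if a provider fails. The real Signpost system
# is not documented here; everything below is an assumption.
from typing import Callable, Optional

# Each "provider" is a callable that takes a prompt and returns a reply,
# or raises an exception if the service is unreachable.
Provider = Callable[[str], str]

def ask_with_fallback(prompt: str, providers: list[Provider]) -> Optional[str]:
    """Try each configured model in order and return the first answer."""
    for provider in providers:
        try:
            return provider(prompt)
        except Exception:
            continue  # this back end failed; try the next one
    return None  # no provider could answer; hand off to a human responder

# Stand-in providers for demonstration; a real deployment would wrap the
# vendors' APIs here instead.
def stub_model_a(prompt: str) -> str:
    return f"[model A] answer to: {prompt}"

def stub_model_b(prompt: str) -> str:
    return f"[model B] answer to: {prompt}"

if __name__ == "__main__":
    print(ask_with_fallback("Where can I find shelter tonight?",
                            [stub_model_a, stub_model_b]))
```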

In addition to creating these innovative tools, the IRC aims to extend this technological infrastructure to other nonprofit humanitarian organizations at no cost. The goal is to establish shared technological resources that organizations lacking technical expertise can adopt without needing to negotiate terms with tech companies or manage the associated deployment risks.

Jeannie Annan, the IRC’s Chief Research and Innovation Officer, stated, “We are striving for clarity regarding valid concerns while simultaneously embracing the optimistic possibilities that technology offers, ensuring that those we serve are not overlooked in solutions that could scale in ways that traditional human interaction or other technologies cannot.”

The chatbot responses provided are carefully vetted by local organizations to ensure accuracy and sensitivity to the precarious circumstances that users may face. For instance, one query involved a woman from El Salvador seeking shelter and services while traveling through Mexico with her child. The chatbot promptly supplied a list of local resource providers.

More intricate or sensitive inquiries are routed to human responders for more nuanced assistance.
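How that triage happens is not spelled out in the article; as a minimal sketch, a pipeline might escalate a query to a human whenever it touches on sensitive topics or the model's confidence is low. The keyword list and threshold below are invented for illustration only.

```python
# Illustrative sketch only: escalating sensitive or low-confidence queries
# to a human responder instead of returning an automated reply.
SENSITIVE_KEYWORDS = {"violence", "trafficking", "medical emergency", "suicide"}

def needs_human(query: str, model_confidence: float, threshold: float = 0.75) -> bool:
    """Route to a human if the query looks sensitive or the model is unsure."""
    query_lower = query.lower()
    if any(keyword in query_lower for keyword in SENSITIVE_KEYWORDS):
        return True
    return model_confidence < threshold

if __name__ == "__main__":
    print(needs_human("Where is the nearest shelter?", model_confidence=0.9))        # False
    print(needs_human("I need a medical emergency contact", model_confidence=0.9))   # True
```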

A significant concern with these tools is the possibility of failure; if conditions on the ground change abruptly, an out-of-date chatbot response could be not only misleading but dangerous.

Another risk is the collection of sensitive data regarding vulnerable populations, which could lead to exploitation by malicious actors. There are worries that personal information could be compromised by hackers or mistakenly disclosed to authoritarian regimes.

To address these concerns, the IRC has established agreements with its technology partners ensuring that their AI models will not be trained on the data generated by the IRC, the local organizations, or the people they serve. Efforts have also been made to anonymize this data by stripping out personal details and location information.
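The article does not say how that stripping is done; as a minimal, hypothetical example, a pre-processing step might replace obvious identifiers such as phone numbers, coordinates, and email addresses with placeholder tokens before a message is stored or shared. Real anonymization is considerably more involved; the patterns below are assumptions for demonstration only.

```python
# Illustrative sketch only: redacting common personal identifiers from a
# message before storage. Not the IRC's actual pipeline.
import re

PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")
COORD_RE = re.compile(r"-?\d{1,3}\.\d{3,},\s*-?\d{1,3}\.\d{3,}")
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(text: str) -> str:
    """Replace phone numbers, GPS coordinates, and emails with placeholders."""
    text = PHONE_RE.sub("[PHONE]", text)
    text = COORD_RE.sub("[LOCATION]", text)
    text = EMAIL_RE.sub("[EMAIL]", text)
    return text

if __name__ == "__main__":
    print(redact("Call me at +52 55 1234 5678, I am near 19.4326, -99.1332"))
```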

In conjunction with the Signpost.AI initiative, the IRC is testing additional tools, such as a digital automated tutor and integrated mapping systems to bolster crisis response efforts.

Cathy Petrozzino from the nonprofit research organization MITRE noted that while AI tools possess tremendous potential, they are accompanied by considerable risks. Responsible use of such technologies requires organizations to scrutinize their effectiveness, fairness, and data privacy protections.

She highlighted the importance of involving a diverse range of stakeholders in the development and governance of these initiatives — including experts with a rich understanding of the relevant context, legal advisors, and representatives from the user communities — rather than relying solely on tech specialists.

“There are numerous promising models that have failed because they were not developed collaboratively with the user base,” she explained.

For initiatives with the potential for significant impact, Petrozzino urged organizations to engage external experts for objective evaluations of their methods. Designers of AI systems should consider the interactions with existing systems and commit to ongoing monitoring of the models’ performance.

Helen McElhinney, executive director of CDAC Network, stressed the necessity of consulting displaced individuals and other beneficiaries in the design process. Doing so takes additional time and resources, she noted, but neglecting their insights raises serious safety and ethical concerns and forfeits invaluable local knowledge.

Recipients of humanitarian services should be informed if an AI model will analyze the information they share, even when the goal is to help the organization respond more effectively. That requires meaningful, informed consent, as well as clarity about who is accountable for consequential decisions the models make about allocating resources.

Degan Ali, CEO of the nonprofit Adeso, which operates in Somalia and Kenya, has long championed a shift in power dynamics within international development to empower local organizations. She raised concerns about how the IRC and similar organizations will tackle access issues, pointing to barriers such as a lack of devices, internet access, or electricity during crises like Hurricane Helene.

Ali also noted that many local organizations lack adequate representation at major humanitarian conferences where AI ethics are debated, with few staff members having the seniority and expertise necessary to contribute meaningfully to these discussions. Nevertheless, local organizations understand the significant impact these technologies can have.

“We must exercise extreme caution to avoid perpetuating power imbalances and biases through technology,” Ali cautioned. “The most intricate questions will always require local, contextual, and lived experiences for meaningful responses.”