70% of humanitarians use AI daily or weekly, while only 21.8% of organizations have formal policies in place
A comprehensive survey of 2,539 humanitarian professionals across 144 countries and territories reveals a striking disconnect: while seven in ten humanitarian workers use artificial intelligence (AI) tools daily or weekly, fewer than one in four organizations have established formal AI policies. This initial insight report shows that while humanitarian workers worldwide are rapidly integrating AI tools into their work, their organizations are struggling to keep pace on governance, training, and ethical frameworks.
The research, conducted by the Humanitarian Leadership Academy and Data Friendly Space, represents one of the most extensive global assessments of AI usage in the humanitarian sector and uncovers a striking "humanitarian AI paradox": individual innovation dramatically outpaces institutional capacity to support responsible AI implementation.
The research points to a sector in substantial flux, where individual innovation exceeds institutional capacity on three fronts:
Skills Gap: While humanitarians demonstrate confidence with AI at entry levels, only 3.5% possess expert-level knowledge. Surprisingly, AI skills exceed general digital capabilities at beginner levels, suggesting AI may serve as an intuitive gateway to technology adoption. Organizations are underinvesting in AI training, creating critical knowledge gaps.
Fragmented Tools: Commercial platforms dominate, with 69% of respondents relying on tools such as ChatGPT, Claude, and Copilot. AI is mainly used for report writing, data summarization, translation, and research assistance.
Governance Vacuum: Despite widespread usage, fewer than 25% of organizations have AI policies. Workers express concerns about data protection, decision-making ethics, environmental impact, and over-reliance on AI at the expense of participatory approaches.
Looking ahead, humanitarian organizations are prioritizing AI expansion in data analytics and forecasting, monitoring and evaluation, and risk and needs assessment. The findings highlight both the sector's readiness for AI transformation and the urgent need for coordinated investment in training, infrastructure, and governance frameworks. Despite widespread adoption by individual practitioners, organizations remain largely in the experimentation phase, with only 8% reporting widespread AI integration. Moreover, with 64% of organizations providing little to no AI training for staff, this gap creates risks around data protection, ethics, and effectiveness in contexts that demand strict neutrality and accountability.