| Paper authors | Jason Sassine |
| In panel on | Humanitarian Accountability in Technology |
| Paper presenter(s) will be presenting | In-Person |
As digital tools increasingly influence humanitarian decision-making, the question is no longer whether to use AI but how to ensure it aligns with humanitarian accountability. Frontier AI models, such as Large Reasoning Models (LRMs), are marketed as capable of structured, reflective thinking. Yet research such as *The Illusion of Thinking* (Shojaee et al., 2025) highlights critical gaps: these models often fail as task complexity grows, produce inconsistent reasoning, and cannot reliably justify their outputs.
In humanitarian contexts, this poses a serious challenge. Decisions made with opaque or overconfident AI systems can erode trust, misdirect resources, or cause unintended harm. This paper presents a grounded alternative: the development and use of local, transparent AI systems within the Lebanese Red Cross. These systems are designed around accountability principles emphasizing explainability, data sovereignty, and adaptability to operational realities.
By reflecting on both model limitations and practical deployment in the field, we explore what accountable technology looks like in humanitarian work. We propose that accountability is not a technical add-on but a design foundation, requiring systems that communities can understand, validate, and govern. This contribution aims to chart a more responsible path for integrating AI into humanitarian action.