| Paper authors | Beáta Paragi |
| In panel on | Practitioner-Academic virtual roundtable: “Taking stock: How is AI changing humanitarian work?” |
| Paper presenter(s) | Will be presenting |
|
Assuming that AI implemented in and by the humanitarian sector can potentially transform the modes of humanitarian governance well beyond the use of various ‘digital tools’, it is worthwhile to ask how AI (artificial, i.e. non-human, intelligence) can be reconciled conceptually with the core assumptions of classic humanitarian work (Donini 2015, 106: there exists a common humanity; the human condition is a universal one; it is possible to generate consensus on the nature and modalities of the forms of humanitarian action that arise from this). How does the ‘artificial’ relate to the first humanitarian principle, humanity, which calls for saving lives and alleviating suffering in a manner that respects and restores personal dignity?
How can AI-assisted operations provide dignity-preserving assistance? Does the 'humanitarian' cease to be human(itarian) if humanitarian work is taken over by AI and humanitarian systems are operated or administered in an AI-inspired spirit? The risk of (potentially) blurred borders between the automated 'backend' processes of aid organizations (tender writing, supply chain management, recruitment, biometric ID management systems, etc.) and the 'frontend' (the interfaces where the AI-assisted humanitarian worker interacts with traumatized beneficiaries) recalls a sort of organized immorality (or amorality?). It implies that humans can easily become links in a tyrannical, technology-governed chain, automated for better compliance, efficiency, or non-material profits, while core human(itarian) values (care, compassion, relations) cannot but remain uncoded.