AI and geospatial data can improve preparedness for and response to natural hazards and complex emergencies. Examples include using deep learning to assess damage from satellite imagery, developing machine learning models to predict humanitarian impact, and identifying hotspot areas where conflict and hazards coincide. Though AI and geospatial data offer clear benefits, their use risks harming the very people humanitarians seek to support. Risks can result from biases inherent in geospatial data and AI; they also arise because transparency and accountability shift with greater reliance on big data and AI. The hype and inflated expectations around AI might likewise lead to reliance on immature technologies, which is especially detrimental when scaling from pilots to national deployments. Legal frameworks such as the GDPR have been put in place in the EU for data protection, and frameworks for the use of AI are underway. However, these frameworks might not translate directly to the cultures and contexts of the Global South. Humanitarians must be able to navigate these technical and ethical issues, yet there is still little understanding of the risks involved. How do biases affect risk and prediction models? How will accountability relationships change? Which legal frameworks apply? The disconnect between communities, humanitarians, and data scientists can exacerbate this lack of understanding. We invite submissions that present current research, case studies of how practitioners deal with these challenges, or combinations thereof. We aim to identify best practices from the field and characterize gaps that future research could address.