Introduction
AI can speed the delivery of aid and strengthen accountability, but field results depend on context, data quality, and governance. These cases highlight what worked, what failed, and how to improve.
AI for crisis response
- Damage assessment from satellite/air imagery accelerates resource allocation but can miss informal settlements; pair with local validation teams.
- Demand forecasting for supplies reduces stockouts yet struggles with fast-changing ground truth; keep a human override and rapid re-training loops.
- Key safeguard: transparency about model confidence and clear escalation when predictions conflict with field reports (a sketch follows this list).
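A minimal sketch of that escalation safeguard in Python, assuming a hypothetical `Forecast` record that carries a self-reported confidence; the names and thresholds are illustrative, not calibrated values:

```python
from dataclasses import dataclass

@dataclass
class Forecast:
    item: str            # e.g. "water purification tablets"
    predicted_demand: float
    confidence: float    # model's self-reported confidence in [0, 1]

def needs_escalation(forecast: Forecast, field_estimate: float,
                     min_confidence: float = 0.7,
                     max_disagreement: float = 0.25) -> bool:
    """Escalate to a human when the model is unsure, or when its
    prediction conflicts with what field teams are reporting."""
    if forecast.confidence < min_confidence:
        return True
    # Relative disagreement between the model and the field report.
    denom = max(abs(field_estimate), 1e-9)
    disagreement = abs(forecast.predicted_demand - field_estimate) / denom
    return disagreement > max_disagreement

# Example: low confidence or a large gap both trigger human review.
f = Forecast("water purification tablets", 12_000, confidence=0.55)
print(needs_escalation(f, field_estimate=9_500))  # True: confidence too low
```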
Supply-chain tracing in reconstruction
- Ledger-based provenance for construction materials can deter diversion but requires reliable on-ramps and tamper-resistant IDs; see the hash-chain sketch after this list.
- Vendor risk scoring helps spot corruption but may unfairly penalise small local firms; include appeals and manual review (sketched below).
- Key safeguard: publish criteria, avoid black-box scoring, and rotate auditors to prevent capture.
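To make the provenance idea concrete, here is a minimal hash-chain sketch in Python rather than a full distributed ledger; `record_transfer`, `verify`, and the material IDs are hypothetical names, and a real deployment still needs the tamper-resistant physical IDs noted above:

```python
import hashlib
import json
import time

def record_transfer(chain: list[dict], material_id: str,
                    from_party: str, to_party: str) -> list[dict]:
    """Append a tamper-evident transfer record: each entry embeds the
    hash of the previous one, so retroactive edits break the chain."""
    prev_hash = chain[-1]["hash"] if chain else "GENESIS"
    body = {
        "material_id": material_id,  # assumes a tamper-resistant physical ID
        "from": from_party,
        "to": to_party,
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return chain + [body]

def verify(chain: list[dict]) -> bool:
    """Recompute every hash; any edited record invalidates the chain."""
    prev = "GENESIS"
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev_hash"] != prev:
            return False
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if digest != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

chain = record_transfer([], "REBAR-0042", "importer", "depot")
chain = record_transfer(chain, "REBAR-0042", "depot", "site-7")
print(verify(chain))  # True until any record is altered
```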
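And a sketch of transparent vendor risk scoring with an appeals path, under the assumption of additive, publishable criteria; the factors, weights, and the small-firm routing rule are illustrative only:

```python
from dataclasses import dataclass

@dataclass
class VendorScore:
    vendor: str
    score: float          # higher = riskier, on a published 0-1 scale
    reasons: list[str]    # plain-language criteria, not a black box
    status: str = "auto"  # "auto" | "under_review" | "appealed"

def score_vendor(vendor: str, late_deliveries: int, ownership_opacity: float,
                 is_small_local_firm: bool) -> VendorScore:
    """Transparent, additive scoring so each factor can be published
    and contested. Weights here are illustrative, not calibrated."""
    reasons = []
    score = 0.0
    if late_deliveries > 3:
        score += 0.3
        reasons.append(f"{late_deliveries} late deliveries")
    score += 0.5 * ownership_opacity
    if ownership_opacity > 0.4:
        reasons.append("opaque beneficial ownership")
    result = VendorScore(vendor, min(score, 1.0), reasons)
    # Small local firms often lack formal records; route them to manual
    # review instead of letting sparse data inflate their risk score.
    if is_small_local_firm and score > 0.5:
        result.status = "under_review"
    return result

def file_appeal(s: VendorScore, grounds: str) -> VendorScore:
    """Every score can be contested; appeals are logged alongside reasons."""
    s.status = "appealed"
    s.reasons.append(f"appeal filed: {grounds}")
    return s

s = score_vendor("local-mason-coop", late_deliveries=5,
                 ownership_opacity=0.6, is_small_local_firm=True)
print(s.status, s.reasons)  # under_review, with published reasons
```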
Data-based human rights monitoring
- Crowdsourced incident reporting scales coverage but invites misinformation; use verification tiers and geolocation checks (first sketch after this list).
- Automated pattern detection can surface hotspots but risks false positives; blend OSINT with trusted local sources (second sketch below).
- Key safeguard: protect witnesses with redaction, consent gates, and secure storage; delay publication while safety risks persist (third sketch below).
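First, a sketch of verification tiers with a geolocation plausibility check; the `Report` fields, the 50 km radius, and the corroboration counts are assumptions for illustration:

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Report:
    text: str
    lat: float
    lon: float
    claimed_location: tuple[float, float]  # where the reporter says it happened
    corroborations: int = 0                # independent confirming sources

def km_between(a: tuple[float, float], b: tuple[float, float]) -> float:
    """Great-circle distance between two (lat, lon) points (haversine)."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(h))

def verification_tier(r: Report) -> str:
    """Tier reports instead of treating them as true/false:
    unverified -> plausible -> corroborated."""
    # Geolocation check: device location should be near the claimed site.
    if km_between((r.lat, r.lon), r.claimed_location) > 50:
        return "unverified"    # location mismatch; needs follow-up
    if r.corroborations >= 2:
        return "corroborated"  # independent sources agree
    return "plausible"
```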
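Second, a sketch of hotspot detection that damps false positives by requiring corroboration from a trusted local source before flagging; both thresholds are illustrative:

```python
from collections import Counter

def detect_hotspots(incidents: list[dict], min_count: int = 5,
                    min_trusted: int = 1) -> list[str]:
    """Flag a district only when the incident count is high AND at least
    one report comes from a trusted local source, so coordinated or
    mistaken crowd reports cannot trigger a hotspot on volume alone."""
    counts = Counter(i["district"] for i in incidents)
    trusted = Counter(i["district"] for i in incidents
                      if i.get("trusted_source"))
    return [d for d, n in counts.items()
            if n >= min_count and trusted[d] >= min_trusted]

incidents = (
    [{"district": "north", "trusted_source": False}] * 6
    + [{"district": "east", "trusted_source": False}] * 5
    + [{"district": "east", "trusted_source": True}]
)
print(detect_hotspots(incidents))  # ['east']: volume alone is not enough
```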
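Third, a sketch of the consent gate and redaction step; the regex patterns are deliberately crude placeholders, since real pipelines need named-entity recognition plus human review before anything is published:

```python
import re

def prepare_for_publication(report: dict) -> dict | None:
    """Consent gate plus redaction: withhold entirely without consent or
    while a safety hold is active; otherwise strip identifying details."""
    if not report.get("witness_consent"):
        return None  # consent gate: never publish without explicit consent
    if report.get("safety_hold"):
        return None  # delay publication while safety risks persist
    text = report["text"]
    # Illustrative redaction patterns only; over- and under-matching are
    # expected, which is why a human reviews the redacted output.
    text = re.sub(r"\+?\d[\d\s-]{7,}\d", "[REDACTED PHONE]", text)
    text = re.sub(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b", "[REDACTED NAME]", text)
    return {"text": text, "region": report["region"]}  # drop precise coordinates
```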
Cross-cutting lessons
- Start in shadow mode to calibrate against human assessments before acting on outputs (sketched after this list).
- Invest in data quality pipelines and feedback loops; retire models that drift beyond agreed thresholds (drift check sketched below).
- Maintain public transparency notes summarising methods, limits, and mitigation steps.
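A sketch of the shadow-mode idea: log (model, human) label pairs without acting on them, and graduate the model only above an agreed agreement rate; the 85% bar is illustrative:

```python
def shadow_mode_agreement(pairs: list[tuple[str, str]]) -> float:
    """During shadow mode the model's outputs are logged but not acted on;
    agreement with human assessors decides whether it graduates to live use."""
    if not pairs:
        return 0.0
    matches = sum(1 for model, human in pairs if model == human)
    return matches / len(pairs)

# (model_label, human_label) pairs collected over a shadow period.
log = [("severe", "severe"), ("minor", "severe"),
       ("severe", "severe"), ("none", "none")]
rate = shadow_mode_agreement(log)
print(f"agreement: {rate:.0%}")  # act on outputs only above an agreed bar
if rate < 0.85:                  # threshold is illustrative
    print("keep model in shadow mode")
```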
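And a sketch of the drift-based retirement rule, assuming an error baseline agreed at deployment and a recent error window supplied by the feedback loop; the 20% relative threshold is illustrative:

```python
from statistics import mean

def should_retire(recent_errors: list[float], baseline_error: float,
                  max_relative_drift: float = 0.2) -> bool:
    """Retire (or retrain) a model whose recent error has drifted beyond
    the agreed threshold relative to its accepted baseline."""
    if not recent_errors:
        return False
    drift = (mean(recent_errors) - baseline_error) / baseline_error
    return drift > max_relative_drift

# Baseline error agreed at deployment; recent window from the feedback loop.
print(should_retire([0.14, 0.16, 0.18], baseline_error=0.12))  # True: ~33% drift
```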
Conclusion
Responsible AI in crises and recovery demands humility, validation, and continuous oversight. Treat models as decision aids, not decision makers, and build in contestability from the start.