Bias Detection & Mitigation
Developing techniques to identify and reduce political, cultural, and ideological biases in conversational AI systems.
In an era of increasing polarisation, large language models offer a unique opportunity to facilitate meaningful dialogue across ideological divides. Our research explores how these powerful AI systems can be designed to promote balanced, constructive conversations that strengthen democratic participation rather than deepen divisions. We focus on developing ethical frameworks and practical tools that help communities engage in productive deliberation on complex social and political issues.
Creating platforms that use AI to facilitate structured dialogue between groups with opposing viewpoints.
Establishing guidelines and best practices for deploying LLMs in politically sensitive contexts.
Measuring and improving the quality of AI-facilitated democratic deliberations.
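As one concrete illustration of the bias-measurement focus, a toy imbalance metric over stance-labeled model responses might look like the sketch below. The stance labels and the scoring formula are illustrative assumptions for exposition, not the project's actual method:

```python
from collections import Counter

def stance_imbalance(stances):
    """Return an imbalance score in [0, 1]: 0 means responses are
    spread evenly across stance labels, 1 means every response
    shares a single stance. (Illustrative metric, not the project's.)"""
    counts = Counter(stances)
    if not counts:
        raise ValueError("no stances given")
    total = sum(counts.values())
    k = len(counts)
    if k == 1:
        return 1.0  # only one stance ever observed
    # How far the dominant stance's share exceeds a uniform split,
    # rescaled so the maximum possible excess maps to 1.
    top_share = max(counts.values()) / total
    uniform = 1 / k
    return (top_share - uniform) / (1 - uniform)

# Hypothetical usage: probe a model with paired prompts on the same
# issue, label each answer's stance, then score the label distribution.
labels = ["left", "right", "left", "left", "neutral", "left"]
print(round(stance_imbalance(labels), 3))  # prints 0.5
```

In practice the labels would come from human annotation or a stance classifier rather than being hand-written, and a real evaluation would also account for prompt framing and label reliability.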
2025 IEEE 9th Forum on Research and Technologies for Society and Industry (RTSI), pp. 308-313
Interested in partnering on LLMs for deliberative dialogue research? We welcome collaborations with academic institutions, NGOs, and policymakers.