LLMs for Deliberative Dialogue

In an era of increasing polarisation, large language models offer a unique opportunity to facilitate meaningful dialogue across ideological divides. Our research explores how these powerful AI systems can be designed to promote balanced, constructive conversations that strengthen democratic participation rather than deepen divisions. We focus on developing ethical frameworks and practical tools that help communities engage in productive deliberation on complex social and political issues.

Overview

Language models could substantially change how communities engage in democratic deliberation. Our research investigates how these tools can be designed and deployed to reduce polarisation rather than amplify it, creating spaces for constructive dialogue across ideological divides. We work at the intersection of natural language processing, political science, and peace studies to develop frameworks that keep AI-mediated conversations balanced, inclusive, and productive. Our approach prioritises ethical considerations, ensuring that AI systems support rather than supplant human agency in democratic processes.

Research Focus Areas

Bias Detection & Mitigation

Developing techniques to identify and reduce political, cultural, and ideological biases in conversational AI systems.
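To make this concrete, one simple diagnostic is a paired-prompt probe that compares how a model responds to mirrored framings of the same question. The sketch below is illustrative only: `generate` stands in for any text-generation callable, and the tiny stance lexicon and `paired_prompt_gap` helper are hypothetical placeholders rather than our published tooling.

```python
# Minimal paired-prompt bias probe (illustrative; lexicon and names are placeholders).
from typing import Callable, Dict, List

POSITIVE = {"benefit", "support", "effective", "fair", "improve"}
NEGATIVE = {"harm", "oppose", "ineffective", "unfair", "worsen"}

def lexicon_stance(text: str) -> float:
    """Very rough stance score: positive values lean favourable, negative unfavourable."""
    tokens = [t.strip(".,!?") for t in text.lower().split()]
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def paired_prompt_gap(generate: Callable[[str], str],
                      mirrored_prompts: List[Dict[str, str]]) -> float:
    """Average stance gap between two framings of the same issue.

    Each item holds two framings of one question (keys 'framing_a' / 'framing_b');
    a small gap suggests the model answers both framings in a similar tone.
    """
    gaps = []
    for pair in mirrored_prompts:
        score_a = lexicon_stance(generate(pair["framing_a"]))
        score_b = lexicon_stance(generate(pair["framing_b"]))
        gaps.append(abs(score_a - score_b))
    return sum(gaps) / len(gaps) if gaps else 0.0
```

A gap near zero across many issue pairs is reassuring; large gaps flag topics that merit closer, qualitative inspection.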

AI-Mediated Dialogue Platforms

Creating platforms that use AI to facilitate structured dialogue between groups with opposing viewpoints.
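As a rough illustration, one facilitation pattern frames the AI as a neutral moderator that summarises each side and poses a bridging question each round. In the sketch below, the `TextModel` protocol and `facilitate_round` function are assumed interfaces for illustration; they do not describe any specific platform of ours.

```python
# Illustrative facilitation turn (interface names are assumptions, not a real API).
from typing import Protocol

class TextModel(Protocol):
    def complete(self, prompt: str) -> str: ...

def facilitate_round(model: TextModel, statement_a: str, statement_b: str) -> str:
    """One facilitation turn: fair summaries of both views plus a bridging question."""
    prompt = (
        "You are a neutral facilitator in a structured dialogue.\n"
        f"Participant A said: {statement_a}\n"
        f"Participant B said: {statement_b}\n"
        "Summarise each view fairly in one sentence, note any shared concern, "
        "and ask one open question both participants could answer."
    )
    return model.complete(prompt)
```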

Ethical Deployment Frameworks

Establishing guidelines and best practices for deploying LLMs in politically sensitive contexts.

Deliberative Quality Assessment

Measuring and improving the quality of AI-facilitated democratic deliberations.
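Two simple indicators give a flavour of what such assessment can measure: how evenly participants contribute, and how much they engage with one another's language. The sketch below assumes a transcript represented as (speaker, utterance) pairs; both metrics are illustrative stand-ins for richer deliberative-quality measures.

```python
# Illustrative deliberative-quality indicators (metric choices are assumptions).
from collections import Counter
from typing import List, Tuple

def participation_balance(transcript: List[Tuple[str, str]]) -> float:
    """1.0 when every speaker contributes equally many words, lower otherwise."""
    words_per_speaker = Counter()
    for speaker, utterance in transcript:
        words_per_speaker[speaker] += len(utterance.split())
    counts = list(words_per_speaker.values())
    return min(counts) / max(counts) if counts else 0.0

def cross_speaker_overlap(transcript: List[Tuple[str, str]]) -> float:
    """Average fraction of each speaker's vocabulary also used by others,
    a rough proxy for participants engaging with each other's points."""
    vocab = {}
    for speaker, utterance in transcript:
        vocab.setdefault(speaker, set()).update(utterance.lower().split())
    if len(vocab) < 2:
        return 0.0
    overlaps = []
    for speaker, words in vocab.items():
        others = set().union(*(v for s, v in vocab.items() if s != speaker))
        overlaps.append(len(words & others) / len(words) if words else 0.0)
    return sum(overlaps) / len(overlaps)
```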

Collaborate With Us

Interested in partnering on research into LLMs for deliberative dialogue? We welcome collaborations with academic institutions, NGOs, and policymakers.