Cambridge Digital Minds


Helping society prepare for the possibility of AI systems with minds

Explore Our Work

Our Work

Based at the University of Cambridge, we conduct interdisciplinary research on the societal, policy, and governance implications of digital minds, with an emphasis on anticipating future debates and decision points.

Research

Studying public, expert, and policymaker beliefs on AI consciousness and welfare through surveys, forecasting, scenario planning, and citizens' assemblies.

Field Building

Strengthening the digital minds and AI welfare field through fellowships, training programs, conferences, and newsletters anchored in academic rigor.

Expert Coordination

Engaging with policymakers, industry, and the public to reduce confusion and polarization, and shape responsible standards before high-stakes conflicts emerge.

Research

Our recent publications on digital minds, AI consciousness, and AI welfare.

Chapter

AI Consciousness Will Divide Society

Caviola, L. (2026) — Forthcoming book chapter

Argues that AI consciousness will become a major source of societal division. Analyzes how stakeholders—researchers, policymakers, industry, and the public—will form conflicting views on AI moral status. Examines historical parallels and proposes strategies for reducing polarization before high-stakes conflicts emerge.

Preprint

The ML Community Must Prepare for AI Consciousness, Perceived or Real

Caviola, L., Sebo, J., & Mindermann, S. (2026)

Addresses the ML community directly, arguing that AI consciousness—whether genuine or perceived—will create challenges requiring advance preparation. Outlines concrete steps researchers can take now, including evaluation frameworks and ethical guidelines. Emphasizes that waiting for scientific consensus is not viable given the pace of AI development.

Report

Subjective Experience in AI Systems: What Do AI Researchers and the Public Believe?

Dreksler, D., Caviola, L., Allen, C., et al. (2025)

Survey of 582 AI researchers and 838 members of the US public on beliefs about AI developing subjective experience. Both groups estimate meaningful probabilities this century (25-30% by 2034, 60-70% by 2100), but disagree on governance and whether protective measures should be implemented now.

Journal

What Will Society Think About AI Consciousness? Lessons from the Animal Case

Caviola, L., Sebo, J., & Birch, J. (2025) — Trends in Cognitive Sciences

Examines how debates over animal consciousness can inform predictions about societal responses to AI consciousness. Analyzes patterns of public attitude formation, expert disagreement, and policy development, drawing lessons for how society might respond to sophisticated AI systems.

Preprint

The Societal Response to Potentially Sentient AI

Caviola, L. (2025)

Analyzes how society is likely to respond to potentially sentient AI, examining public opinion, media narratives, corporate responses, and policy development. Develops a framework for key variables shaping societal responses and identifies intervention points for improving outcomes.

Preprint

Reluctance to Harm AI

Allen, C. & Caviola, L. (2025)

Investigates moral intuitions about harming AI through experimental studies. Finds significant reluctance to harm AI even among those who deny AI consciousness. Explores psychological mechanisms including anthropomorphism and moral caution, suggesting behavior may diverge from stated beliefs.

Preprint

Public Skepticism About AI Consciousness

Ladak, A. & Caviola, L. (2025)

Documents widespread public skepticism toward AI consciousness through survey research. Explores psychological and cultural factors underlying skepticism, including intuitions about biological requirements for consciousness. Discusses implications for public communication and future policy debates.

Report

Futures with Digital Minds: Expert Forecasts in 2025

Caviola, L. & Saad, B. (2025)

Survey of 67 researchers finds that a majority believe conscious AI is possible and likely, with an estimated 50% probability by 2050. Experts anticipate rapid growth in collective welfare capacity once such systems emerge, highlighting significant uncertainty and the need for early preparation.

Report

The Social Science of Digital Minds: Research Agenda

Caviola, L. (2024)

Outlines a research agenda for the social science of digital minds, covering public attitudes, expert beliefs, and policy implications. Proposes priority directions including longitudinal tracking and cross-cultural comparisons, arguing for anticipatory research before AI consciousness becomes pressing.

Preprint

Increasing Concern for Digital Beings Through LLM Persuasion

Allen, C. & Caviola, L. (2024)

Examines whether LLM interactions can shift attitudes toward digital beings. Finds certain conversations can increase moral concern for digital entities, raising questions about AI's role in shaping public attitudes. Discusses implications for AI design and potential manipulation risks.

Report

Digital Minds Takeoff Scenarios

Saad, B. & Caviola, L. (2024)

Develops scenarios for how digital minds might emerge and proliferate. Examines variables including development speed, scientific consensus, and policy responses. Explores futures from gradual emergence with coordinated governance to rapid development with disruption.

Fellowship Programme

A selective residential fellowship for early- and mid-career researchers working on digital minds, AI consciousness, and AI welfare.

Digital Minds Fellowship

A 5-day intensive residential programme hosted at the University of Cambridge. The fellowship enables deep, cross-disciplinary engagement across philosophy, social science, technical research, policy, and governance, fostering shared norms, judgment, and coordination capacity.

Programme Highlights

  • Structured teaching and discussion sessions
  • One-on-one mentoring with senior researchers
  • Independent and small-group project development
  • Career and field-building sessions
  • Direct participation in the Expert Workshop

Summer 2026

Applications for the inaugural cohort will open in Spring 2026.

Learn More

Expert Workshop

A two-day, output-oriented workshop bringing together fellows, mentors, and invited experts for structured research on digital minds.

2 Days
~50 Experts
3 Outputs
8–9 Aug 2026
University of Cambridge
Day 1

AI Welfare Intervention Assessment

Systematic evaluation of candidate interventions across effectiveness, tractability, risks, and interaction with AI safety.

Research Paper
Day 2

Expert Panel & Consensus

Structured elicitation of views on timelines, key uncertainties, and concrete policy recommendations.

Policy Document
Alt

Long-term Scenario Planning

Developing scenarios through ~2100 and generating novel strategic intervention ideas.

Strategic Report

Fellowship alumni automatically attend.

Learn More

Online Course

Build foundational understanding of digital minds, AI consciousness, and AI welfare through our facilitated online programme.

Launching 2026

Introduction to Digital Minds

An 8-week facilitated course designed to equip participants with shared vocabulary and conceptual grounding. Learn to reason well under uncertainty and engage responsibly with emerging debates on AI consciousness and moral status.

  • 8 Weeks: Facilitated learning
  • Project Phase: Optional 4-week extension
  • Fellowship Track: Pathway to in-person programme
Learn More

About Us

Cambridge Digital Minds is a research initiative at the University of Cambridge focused on societal preparedness for digital minds.

Our Mission

Help society prepare to make accurate, ethical, and well-reasoned decisions around digital minds, including questions of consciousness, welfare, and moral status, even under conditions of deep uncertainty and disagreement.

Our Approach

We focus on improving societal decision-making under uncertainty by anticipating societal dynamics, clarifying areas of expert agreement and disagreement, developing responsible frameworks and standards, and creating credible institutions that can inform policy and public debate.

Our Team

  • Lucius Caviola — Director
  • Will Millership — Director of Operations
  • Henry Shevlin — Associate Director
  • Bridget Harris — Online Course Lead
  • Pooja Khatri — Online Course Lead
  • Ali Ladak — Postdoctoral Researcher
  • Brad Saad — Senior Researcher (Affiliate)

3 Core Activities
2026 Fellowship Launch
50+ Workshop Participants
8-Week Online Course