Cambridge Digital Minds

Preparing society for AI systems with real or perceived minds

University of Cambridge Centre for the Future of Intelligence

Our Work

As AI systems become more sophisticated, questions arise about whether they could one day have minds, with feelings and a capacity for welfare that warrant moral consideration. Experts are divided, and a growing number of people are likely to believe that they do. Whatever the truth, these perceptions will have profound implications for society and policy.

We address these challenges through research, field-building, education, and engagement with policymakers and industry to support well-informed public debate.

Research

Interdisciplinary, rigorous research on public, expert, and policymaker beliefs about AI consciousness and welfare, conducted through surveys, forecasting, scenario planning, and citizens' assemblies.

Capacity Building

Strengthening the digital minds field through fellowships, training programs, and an online course anchored in academic rigour.

Engagement

Fostering expert coordination and working with policymakers, industry, and the public to reduce confusion and polarisation before high-stakes conflicts emerge.

Research

Our recent publications on digital minds, AI consciousness, and AI welfare.

Chapter

AI Consciousness Will Divide Society

Caviola, L. (2026) — Forthcoming book chapter

Argues that AI consciousness will become a major source of societal division. Analyzes how stakeholders—researchers, policymakers, industry, and the public—will form conflicting views on AI moral status. Examines historical parallels and proposes strategies for reducing polarization before high-stakes conflicts emerge.

Preprint

Moral Concern for AI

Allen, C., Lewis, J. & Caviola, L. (2025)

Investigates moral intuitions about harming AI through experimental studies. Finds significant reluctance to harm AI even among those who deny AI consciousness. Explores psychological mechanisms including anthropomorphism and moral caution, suggesting behavior may diverge from stated beliefs.

Preprint

The ML Community Must Prepare for AI Consciousness, Perceived or Real

Caviola, L., Sebo, J., & Mindermann, S. (2026)

Addresses the ML community directly, arguing that AI consciousness—whether genuine or perceived—will create challenges requiring advance preparation. Outlines concrete steps researchers can take now, including evaluation frameworks and ethical guidelines. Emphasizes that waiting for scientific consensus is not viable given the pace of AI development.

Report

Subjective Experience in AI Systems: What Do AI Researchers and the Public Believe?

Dreksler, D., Caviola, L., Allen, C., et al. (2025)

Survey of 582 AI researchers and 838 members of the US public on beliefs about AI developing subjective experience. Both groups estimate meaningful probabilities this century (25–30% by 2034, 60–70% by 2100) but disagree on governance and whether protective measures should be implemented now.

Journal

What Will Society Think About AI Consciousness? Lessons from the Animal Case

Caviola, L., Sebo, J., & Birch, J. (2025) — Trends in Cognitive Sciences

Examines how debates over animal consciousness can inform predictions about societal responses to AI consciousness. Analyzes patterns of public attitude formation, expert disagreement, and policy development, drawing lessons for how society might respond to sophisticated AI systems.

Preprint

The Societal Response to Potentially Sentient AI

Caviola, L. (2025)

Analyzes how society is likely to respond to potentially sentient AI, examining public opinion, media narratives, corporate responses, and policy development. Develops a framework for key variables shaping societal responses and identifies intervention points for improving outcomes.

Preprint

Public Skepticism About AI Consciousness

Ladak, A. & Caviola, L. (2025)

Documents widespread public skepticism toward AI consciousness through survey research. Explores psychological and cultural factors underlying skepticism, including intuitions about biological requirements for consciousness. Discusses implications for public communication and future policy debates.

Report

Futures with Digital Minds: Expert Forecasts in 2025

Caviola, L. & Saad, B. (2025)

Survey of 67 researchers finds that a majority believe conscious AI is possible and likely, with a 50% probability by 2050. Experts anticipate rapid growth in collective welfare capacity once such systems emerge, highlighting significant uncertainty and the need for early preparation.

Report

The Social Science of Digital Minds: Research Agenda

Caviola, L. (2024)

Outlines a research agenda for the social science of digital minds, covering public attitudes, expert beliefs, and policy implications. Proposes priority directions including longitudinal tracking and cross-cultural comparisons, arguing for anticipatory research before AI consciousness becomes pressing.

Preprint

Increasing Concern for Digital Beings Through LLM Persuasion

Allen, C. & Caviola, L. (2024)

Examines whether LLM interactions can shift attitudes toward digital beings. Finds certain conversations can increase moral concern for digital entities, raising questions about AI's role in shaping public attitudes. Discusses implications for AI design and potential manipulation risks.

Report

Digital Minds Takeoff Scenarios

Saad, B. & Caviola, L. (2024)

Develops scenarios for how digital minds might emerge and proliferate. Examines variables including development speed, scientific consensus, and policy responses. Explores futures from gradual emergence with coordinated governance to rapid development with disruption.

Digital Minds Fellowship

3–9 August 2026 · Cambridge University · 15 Fellows

Currently accepting applications. Deadline: 27 March 2026.

A selective residential fellowship for early- and mid-career researchers working on digital minds, AI consciousness, and AI welfare.

Programme Overview

A 5-day intensive residential programme hosted at Cambridge University, followed by the Digital Minds Strategy Workshop. The fellowship enables deep, cross-disciplinary engagement across philosophy, social science, technical research, policy, and governance, fostering shared norms, judgment, and coordination capacity.

Highlights

  • Structured teaching and discussion sessions
  • One-on-one mentoring with senior researchers
  • Independent and small-group project development
  • Career and field-building sessions
  • Direct participation in the Digital Minds Strategy Workshop

Summer 2026

Applications for the inaugural cohort are now open. Deadline to apply is 27 March 2026.


Digital Minds Strategy Workshop

8–9 August 2026 · Cambridge University

A two-day, output-oriented workshop bringing together fellows, mentors, and invited experts for structured, collaborative research on neglected strategic questions about how society should prepare for the emergence of digital minds.

Spaces for this workshop are very limited; we intend to bring together a small cross-disciplinary group drawn from digital minds research, AI policy and governance, and macrostrategy.

Session Topics

  • Mapping the societal intervention landscape
  • Systematic comparison of policy and institutional approaches
  • Scenario planning for plausible future trajectories
  • Macrostrategy development
  • Research priorities for strategy and governance

Workshop Output

Unlike typical academic conferences, these sessions are designed to contribute towards a comprehensive report that lays the groundwork for what policy and governance frameworks should look like in this emerging field.

Fellowship Connection

All fellows from the Digital Minds Fellowship are invited to attend the workshop, allowing them to contribute to research outputs and build connections with senior experts in the field.

Online Course

Build foundational understanding of digital minds, AI consciousness, and AI welfare through our facilitated online programme.

Introduction to Digital Minds
Launching 2026

An 8-week facilitated course designed to equip participants with shared vocabulary and conceptual grounding. Learn to reason well under uncertainty and engage responsibly with emerging debates on AI consciousness and moral status.

  • 8 weeks of facilitated learning
  • Project phase: optional 4-week extension
  • Fellowship track: pathway to the in-person programme

About Us

Cambridge Digital Minds is a research initiative at the University of Cambridge focused on societal preparedness for digital minds, whether perceived or real.

Our Mission

To help society prepare to make accurate, ethical, and well-reasoned decisions about digital minds, including questions of consciousness, welfare, moral status, and how we interact with them.

Our Approach

We focus on improving societal decision-making under uncertainty through research, field-building, and institutional development. This includes anticipating societal dynamics, clarifying expert agreement and disagreement, developing responsible frameworks and standards, and building communities and institutions that can inform policy and public debate.

Our Team

  • Lucius Caviola, Principal Investigator
  • Henry Shevlin, Associate Director
  • Bradford Saad, Research Affiliate
  • Bridget Harris, Online Course Lead
  • Pooja Khatri, Online Course Lead
  • Austin Smith, Research Assistant