Digital Minds in 2025: A Year in Review
AI consciousness enters the public discourse, Anthropic leads by example, and the field begins to grow.
December 18, 2025

Preparing society for AI systems with real or perceived minds
As AI systems become more sophisticated, questions arise about whether they could one day have minds with feelings and a capacity for welfare that warrant moral consideration. Experts are divided, and a growing number of people will come to believe that they do. Whatever the truth, these perceptions will have profound implications for society and policy.
We address these challenges through research, field-building, education, and engagement with policymakers and industry to support well-informed public debate.
Interdisciplinary, rigorous research on public, expert, and policymaker beliefs about AI consciousness and welfare through surveys, forecasting, scenario planning, and citizens' assemblies.
Strengthening the digital minds field through fellowships, training programs, and an online course anchored in academic rigour.
Fostering expert coordination and working with policymakers, industry, and the public to reduce confusion and polarisation before high-stakes conflicts emerge.
Our recent publications on digital minds, AI consciousness, and AI welfare.
Argues that AI consciousness will become a major source of societal division. Analyzes how stakeholders—researchers, policymakers, industry, and the public—will form conflicting views on AI moral status. Examines historical parallels and proposes strategies for reducing polarization before high-stakes conflicts emerge.
Investigates moral intuitions about harming AI through experimental studies. Finds significant reluctance to harm AI even among those who deny AI consciousness. Explores psychological mechanisms including anthropomorphism and moral caution, suggesting behavior may diverge from stated beliefs.
Addresses the ML community directly, arguing that AI consciousness—whether genuine or perceived—will create challenges requiring advance preparation. Outlines concrete steps researchers can take now, including evaluation frameworks and ethical guidelines. Emphasizes that waiting for scientific consensus is not viable given the pace of AI development.
Survey of 582 AI researchers and 838 US public participants on beliefs about AI developing subjective experience. Both groups estimate meaningful probabilities this century (25-30% by 2034, 60-70% by 2100), but disagree on governance and whether protective measures should be implemented now.
Examines how debates over animal consciousness can inform predictions about societal responses to AI consciousness. Analyzes patterns of public attitude formation, expert disagreement, and policy development, drawing lessons for how society might respond to sophisticated AI systems.
Guides, reports, podcasts, and more from our work on digital minds.
A practical guide for people who wonder whether AI chatbots are conscious, or are concerned that they might be. A growing number of people report believing their AI is conscious on the basis of personal conversations. We created this public resource to help.
The guide makes two key points: Today's AIs are probably not conscious, but we cannot be certain. Current AIs are highly skilled at appearing conscious, and humans are prone to projecting agency onto them. But it's still important to take AI consciousness seriously—future systems could be conscious, and that possibility demands preparation.
Visit WhenAISeemsConscious.org →

We surveyed 67 experts about whether, when, and how digital minds might be created. Key findings: it is very likely that digital minds are possible in principle, with a median estimate of a 50% chance that they will be created by 2050.
Conditional on digital minds arriving by 2040, their collective welfare capacity could exceed humanity's within a decade. There's nothing approaching consensus on whether their welfare will be positive or negative—humanity's poor track record with vulnerable groups is reason for concern.
Read the full report →

Currently accepting applications. Deadline: 27 March 2026. Apply here.
A selective residential fellowship for early- and mid-career researchers working on digital minds, AI consciousness, and AI welfare.
A 5-day intensive residential programme hosted at Cambridge University, followed by the Digital Minds Strategy Workshop. The fellowship enables deep, cross-disciplinary engagement across philosophy, social science, technical research, policy, and governance, fostering shared norms, judgment, and coordination capacity.
Applications for the inaugural cohort are now open. Deadline to apply is 27 March 2026.
A two-day, output-oriented workshop bringing together fellows, mentors, and invited experts for structured, collaborative research on neglected strategic questions about how society should prepare for the emergence of digital minds.
We have a very limited number of spaces available for this workshop and intend to bring together a small cross-disciplinary group drawn from digital minds, AI policy and governance, and macrostrategy.
Unlike typical academic conferences, these sessions are designed to contribute towards a comprehensive report that lays the groundwork for what policy and governance frameworks should look like in this emerging field.
All fellows from the Digital Minds Fellowship are invited to attend the workshop, allowing them to contribute to research outputs and build connections with senior experts in the field.
Build foundational understanding of digital minds, AI consciousness, and AI welfare through our facilitated online programme.
An 8-week facilitated course designed to equip participants with shared vocabulary and conceptual grounding. Learn to reason well under uncertainty and engage responsibly with emerging debates on AI consciousness and moral status.
Cambridge Digital Minds is a research initiative at the University of Cambridge focused on societal preparedness for digital minds, whether perceived or real.
Help society prepare to make accurate, ethical, and well-reasoned decisions around digital minds, including questions of consciousness, welfare, moral status, and how we interact with them.
We focus on improving societal decision-making under uncertainty through research, field-building, and institutional development. This includes anticipating societal dynamics, clarifying expert agreement and disagreement, developing responsible frameworks and standards, and building communities and institutions that can inform policy and public debate.