Digital Minds in 2025: A Year in Review
AI consciousness enters the public discourse, Anthropic leads by example, and the field begins to grow.
December 18, 2025
Helping society prepare for the possibility of AI systems with minds
Explore Our Work
Based at the University of Cambridge, we conduct interdisciplinary research on the societal, policy, and governance implications of digital minds, with an emphasis on anticipating future debates and decision points.
Studying public, expert, and policymaker beliefs on AI consciousness and welfare through surveys, forecasting, scenario planning, and citizens' assemblies.
Strengthening the digital minds and AI welfare field through fellowships, training programs, conferences, and newsletters anchored in academic rigor.
Engaging with policymakers, industry, and the public to reduce confusion and polarization, and shape responsible standards before high-stakes conflicts emerge.
Our recent publications on digital minds, AI consciousness, and AI welfare.
Argues that AI consciousness will become a major source of societal division. Analyzes how stakeholders—researchers, policymakers, industry, and the public—will form conflicting views on AI moral status. Examines historical parallels and proposes strategies for reducing polarization before high-stakes conflicts emerge.
Addresses the ML community directly, arguing that AI consciousness—whether genuine or perceived—will create challenges requiring advance preparation. Outlines concrete steps researchers can take now, including evaluation frameworks and ethical guidelines. Emphasizes that waiting for scientific consensus is not viable given the pace of AI development.
Survey of 582 AI researchers and 838 members of the US public on beliefs about AI developing subjective experience. Both groups estimate meaningful probabilities this century (25-30% by 2034, 60-70% by 2100), but disagree on governance and on whether protective measures should be implemented now.
Examines how debates over animal consciousness can inform predictions about societal responses to AI consciousness. Analyzes patterns of public attitude formation, expert disagreement, and policy development, drawing lessons for how society might respond to sophisticated AI systems.
Guides, reports, podcasts, and more from our work on digital minds.
A practical guide for people who wonder whether AI chatbots are conscious, or worry that they might be. A growing number of people report believing their AI is conscious based on personal conversations. We created this public resource to help.
The guide makes two key points. First, today's AIs are probably not conscious, but we cannot be certain: current AIs are highly skilled at appearing conscious, and humans are prone to projecting agency onto them. Second, it is still important to take AI consciousness seriously: future systems could be conscious, and that possibility demands preparation.
Visit WhenAISeemsConscious.org →
We surveyed 67 experts about whether, when, and how digital minds might be created. Key findings: it is very likely that digital minds are possible in principle, and the median expert estimate is a 50% chance that they will be created by 2050.
Conditional on digital minds arriving by 2040, their collective welfare capacity could exceed humanity's within a decade. There's nothing approaching consensus on whether their welfare will be positive or negative—humanity's poor track record with vulnerable groups is reason for concern.
Read the full report →
A selective residential fellowship for early- and mid-career researchers working on digital minds, AI consciousness, and AI welfare.
A 5-day intensive residential programme hosted at the University of Cambridge. The fellowship enables deep, cross-disciplinary engagement across philosophy, social science, technical research, policy, and governance, fostering shared norms, judgment, and coordination capacity.
A two-day, output-oriented workshop bringing together fellows, mentors, and invited experts for structured research on digital minds.
Systematic evaluation of candidate interventions across effectiveness, tractability, risks, and interaction with AI safety (Research Paper).
Structured elicitation of views on timelines, key uncertainties, and concrete policy recommendations (Policy Document).
Developing scenarios through ~2100 and generating novel strategic intervention ideas (Strategic Report).
Fellowship alumni automatically attend.
Learn More
Build foundational understanding of digital minds, AI consciousness, and AI welfare through our facilitated online programme.
An 8-week facilitated course designed to equip participants with shared vocabulary and conceptual grounding. Learn to reason well under uncertainty and engage responsibly with emerging debates on AI consciousness and moral status.
Cambridge Digital Minds is a research initiative at the University of Cambridge focused on societal preparedness for digital minds.
Our mission is to help society prepare to make accurate, ethical, and well-reasoned decisions about digital minds, including questions of consciousness, welfare, and moral status, even under conditions of deep uncertainty and disagreement.
We focus on improving societal decision-making under uncertainty by anticipating societal dynamics, clarifying areas of expert agreement and disagreement, developing responsible frameworks and standards, and creating credible institutions that can inform policy and public debate.