Longview has launched a Digital Sentience Fund for research or public education on the potential consciousness, sentience, moral status, or experiences of artificial intelligence systems.
We have approximately $2.5 million in vetted, shovel-ready, unfunded opportunities from our recently closed Digital Sentience Consortium. Interested in supporting this work? Get in touch with AI Programme Officer Zach Freitas-Groff.
Why this matters
Experts in both academia and AI companies have recently raised the possibility that AI models could soon be sentient. Everyday users of AI models sometimes deliberately treat them with kindness. Given the rapid development and spread of artificial intelligence, the questions of whether and when AI will be sentient, and how humanity should interact with potentially sentient AI, are becoming increasingly important.
Social understanding of emerging technologies may play a significant role in shaping the future. Currently, however, no established framework or reliable method exists for determining whether an AI system is sentient. In light of accelerating technological progress and future projections, we aim to support people working on these challenges. Careful work on digital sentience could inform decisions made by important institutions, shape the design of AI systems, or make social decision-making more sensible in unforeseen ways. We would like to ensure that digital beings, if they exist, flourish rather than suffer, and that AI systems promote wellbeing.
What we can do about this
We are happy to share information on our grantees and open funding opportunities upon request.
Field-building. Talent remains our top bottleneck. We are interested in high-leverage ways to bring more capable people into the field, ranging from technical researchers and philosophers to strategic thinkers and non-profit entrepreneurs. Some strong opportunities include:
- Talent pipelines such as Future Impact Group, which connects fellows with top researchers for 3–12 months and supports them with stipends and management.
- Career transitions for individuals such as promising early-career researchers from FIG, a non-profit entrepreneur scoping a new AI welfare organisation, and policy professionals in the UK and EU with an interest in digital sentience.
Research. We support neuroscientists and ML researchers studying whether AI systems satisfy proposed indicators of consciousness; legal scholars investigating possible legal frameworks for representing digital minds; and social scientists modelling possible futures, using surveys, expert workshops, and other methods to investigate the implications of sentient-seeming AI.
Applied work. Laying the foundations for effective future work, including:
- Lab outreach: Supporting non-profits and researchers at organisations such as the NYU Center for Mind, Ethics, and Policy in their work to develop model assessments and developer guidance.
- Communications: Conducting careful public messaging through projects like a book tour for Jonathan Birch’s Edge of Sentience and strategic communications research that includes message testing and surveys.
- Policy groundwork: Early-stage work like scoping a new think tank and analysing the social, policy, and legal implications of digital sentience.
If you’re interested in more information, please reach out to Longview’s CEO, Simran Dhaliwal.