Request for Proposals: Applied Work on Potential Digital Sentience and Society
Longview Philanthropy, Macroscopic Ventures, and The Navigation Fund invite proposals for applied work on the potential consciousness, sentience, moral status, or experiences of artificial intelligence systems. Funding may cover the founding of new organisations, the establishment of new programmes at existing organisations, support for applied academic work, and other types of projects not listed here. For new organisations, we may offer some operational support free of charge, based on what Longview and successful applicants determine to be appropriate.
Areas of interest
Our priorities are to help build the field of digital sentience, fund work with a near-to-medium-term path to concrete effects, and explore the political and social status of advanced AI systems. We are currently more interested in supporting work and individuals who approach the issue sensibly than in advocating for a particular viewpoint, particularly given the deep fundamental uncertainty in the field.
Building the field of digital sentience
The main bottleneck to progress on digital sentience concerns is the small number of people working on this topic. To address this, we are excited about leveraged ways to grow the number of people and organisations in the space, such as conferences and workshops, targeted communications to expert audiences, and support and mentorship structures for researchers.
Better understanding the sentience and moral status of AI systems
A key gap for digital sentience work is the lack of research on what features correlate with sentience and to what extent AI systems possess these features. Existing philosophical theories give a range of criteria for determining which systems are conscious, and there has been some progress in applying these theories to AI systems (1, 2). However, the amount of research on this subject is at least an order of magnitude smaller than it ought to be. It would be useful to have more conceptual research that clearly delineates the structure of AI systems for comparison with philosophical theories. Empirical research that can bypass some philosophical debate would also be valuable, such as studying the likelihood of sentience or moral status in AI systems via introspection or self-reports (3, 4). There is likewise an urgent need for studying what would cause conscious AI systems to have valenced—that is, positive or negative—experiences (5).
Governance systems for a world with conscious or highly agentic AI
Human society may soon make lasting choices, consciously or unconsciously, about the position AI systems will have in it. We’d be excited to see work exploring how existing political systems might respond to AI that has preferences and makes choices, especially in a world where it might have—or be perceived to have—moral status. Possible ideas include trade or negotiation between humans and AI, citizens’ assemblies, expert panels, ‘if-then’ commitments, research protocols, or economic or political rights.
Aligning AI models with compassion and wellbeing
Current AI companies’ processes may determine the shape of future AI systems. We are interested in work to ensure that AI companies develop models that consider the interests of sentient beings and that, if the models themselves are sentient, have positive existences.
Communications strategy for AI consciousness and welfare
The idea of conscious AI has already made its way into the news, and we need to prepare for more public discussions and debates on this matter in the coming years. We would be excited about message development, sentiment tracking, and other work to help digital sentience organisations effectively communicate their ideas to key decision makers and the general public. We would also be enthusiastic about tractable work to establish reasonable, publicly trusted experts on the topic.
Activities Eligible for Funding
The types of projects we may fund are broad. Below are some activities that we expect to constitute a large share of what we fund. We are open to other activities as long as they are consistent with our charitable purpose and relate to areas like those described above.
Support structures for independent researchers on digital sentience
Growing the number of individuals with technical expertise engaging with digital sentience is a key priority. Establishing a dedicated think tank or organisation to support practical research in this area would provide valuable infrastructure for advancing the field.
Seed funding for new organisations
There is room for many more people to work on issues related to digital sentience and the role of AI in society. New initiatives launched in the next year may be well positioned to play a leading role in key future moments for digital sentience.
Mentorship programmes for technical researchers
Organisations like the MATS Program, BlueDot Impact, and SPAR have helped to increase the number of people able to do technical research on AI safety. We’d like to support similar work for digital sentience.
Internships or hires focused on digital sentience and society at existing AI organisations or institutions
A key goal of this request for proposals is to increase the chance that decisions about AI in the coming years take questions related to the potential welfare of AI systems seriously and responsibly. We’d be excited to support individuals in upskilling on digital sentience via positions at existing organisations and institutions developing AI policies.
Funding for current organisations to establish digital sentience programmes
We are open to providing general funding for organisations with promising track records to work on digital sentience. We do not want to undermine organisations doing important work on other issues and expect to consider whether digital sentience work would play to an organisation’s strengths.
Requirements and Selection Criteria
Applications will be selected primarily on the basis of the following criteria:
- The potential impact of proposed activities if executed well, as demonstrated by the proposed theory of impact and evidence for its potential.
- The track record of those involved in the proposal.
- A demonstrated interest in digital sentience and related issues based on application answers, prior work, and references.
- A demonstrated understanding of the interactions between digital sentience and AI-related existential risk, and a commitment to high-impact work of plausible practical relevance.
- Project cost.
- Compliance with all relevant local laws and the ability to use funds for charitable activities as defined by Section 501(c)(3) of the U.S. Internal Revenue Code.
To apply, please fill out the application here. The application requires the following:
- Basic grant details and logistics questions (e.g., biographical details for those involved)
- Resumes, CVs, or LinkedIn pages for key personnel that include prior employment, educational background, and key achievements
- Project proposal (two to five paragraphs)
- Mainline budget amount and a brief explanation of what you would do instead if you did not secure funding
For questions about the application, please email Zach Freitas-Groff. The application deadline is July 9, 2025.