We focus on global challenges that could transform our future for many generations to come, including safely navigating emerging technologies and fostering a society with a more long-term outlook.
Our grantmaking centres on finding the best funding opportunities in the world. The past decade has seen the birth and growth of a global network of researchers dedicated to working out how to do as much good as possible. These researchers work in academic institutions such as the Global Priorities Institute at the University of Oxford, and in nonprofits such as Open Philanthropy and The Gates Foundation.
We work with these researchers to identify grants that represent the current best funding opportunities at the margin. One key insight from this research is the importance of cause prioritisation: causes differ greatly in their scale, their tractability, and their neglectedness. To maximise our impact, we therefore need to rigorously prioritise between causes using the best available evidence.
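As a minimal sketch, the three factors above can be combined multiplicatively, so that a cause ranks highly only if it scores well on all of scale, tractability, and neglectedness. The causes and ratings below are purely illustrative placeholders, not actual assessments.

```python
def priority_score(scale: float, tractability: float, neglectedness: float) -> float:
    """Combine the three factors multiplicatively: a low score on any
    one factor drags down the cause's overall priority."""
    return scale * tractability * neglectedness

# Hypothetical 1-10 ratings for three example causes (illustrative only).
causes = {
    "Cause A": (9, 4, 8),
    "Cause B": (6, 7, 3),
    "Cause C": (8, 5, 7),
}

# Rank causes from highest to lowest priority score.
ranked = sorted(causes, key=lambda c: priority_score(*causes[c]), reverse=True)
print(ranked)  # → ['Cause A', 'Cause C', 'Cause B']
```

The multiplicative form reflects why neglectedness matters: a cause that is vast in scale but already well funded offers less marginal impact than a comparable cause few others are working on.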
Cause prioritisation leads us to focus on opportunities to protect future generations.
The future is vast in scale: depending on our choices in the coming centuries, it could stretch for eons or dwindle into oblivion, and be inordinately good or inordinately bad. Future generations are effectively disenfranchised: they hold no vote in today's governments and no purchasing power in today's markets. And although affecting the long-term future is difficult, there are things we can do today that will have a lasting impact. In particular, we have identified three categories of opportunities where ambitious grantmaking could have significant and long-lasting effects:
1. Reducing Existential Risk: Interventions to prevent global catastrophes that could permanently curtail humanity’s long-term potential. Bioterrorism, artificial intelligence, extreme climate change, and nuclear war all threaten to reverse millennia of human progress, and each risk can be mitigated with carefully targeted philanthropic capital.
2. Values and Institutions for the Long Term: Improvements to humanity’s values and institutions to ensure a future beneficial to all. We seek to avoid a world where advanced technology or the concentration of political power allows one group to hold a monopoly on the values that shape the future. We work to improve institutional decision-making with the latest decision science and to make long-term thinking central to policymaking.
3. Prioritisation Research: Research to inform our actions to positively affect the long-term future, and to help us find ever better opportunities to improve the long-run trajectory of civilisation.
While many of our grant recommendations are robustly beneficial, we are firm believers in taking risky bets on promising new ideas. History shows that philanthropy is at its best when it operates on the model of venture capital, funding neglected opportunities with a limited track record but the potential for massive philanthropic upside in the event of success.
1. REDUCING EXISTENTIAL RISK
ACTING FASTER AGAINST PANDEMICS
A study led by researchers at the University of Oxford that aims to validate a new diagnostic tool that uses nanopore sequencing to detect emerging infectious diseases early. If widely adopted, it would allow medical professionals to start testing people much faster than was possible for COVID-19 and could therefore prove crucial in containing future pandemics.
PREVENTING BIOLOGICAL CATASTROPHES
A research project by The Johns Hopkins Center for Health Security into new approaches to mitigate and prevent global catastrophic biological risks in collaboration with the Future of Humanity Institute, a multidisciplinary research institute at the University of Oxford focused on the analysis of existential risks.
PROMOTING SAFE AND BENEFICIAL AI
Research by the Center for Human-Compatible AI (CHAI) into provably beneficial AI and increasing the emphasis on safety in the wider AI field. Led by Professor Stuart Russell, co-author of the most widely used textbook on AI, CHAI is one of the first academic research centres dedicated to the design of safe and reliably beneficial artificial intelligence systems.
SOLVING THE ALIGNMENT PROBLEM
Research by Ought on factored cognition, or how complex cognitive tasks can be broken down into semantically meaningful pieces. Founded by a former Stanford machine learning researcher, Ought’s work training artificial intelligence on human deliberation data may play a role in aligning transformative AI with human values.
2. VALUES AND INSTITUTIONS FOR THE LONG TERM
IMPROVING RISK ASSESSMENT
A research project into the psychological processes that have inhibited rational decision-making about the effects of nuclear weapons, led by Decision Research. Founded by the distinguished cognitive psychologist Paul Slovic, Decision Research focuses on understanding human judgment, decision-making, and risk.
SHAPING LONG-TERM POLICY
Launching Alpenglow, which aims to put long-term thinking at the heart of UK policymaking. Founded by a former UK policy advisor, Alpenglow converts the work of leading academics into actionable policy recommendations to solve long-term challenges in government, with a focus on artificial intelligence and biosecurity.
3. PRIORITISATION RESEARCH
DRIVING GLOBAL PRIORITIES RESEARCH
Launching the Forethought Foundation for Global Priorities Research, which aims to promote philosophy and social science research into how best to positively influence the long-term future. Working closely with the University of Oxford’s Global Priorities Institute, the Foundation offers global priorities research scholarships, fellowships and grants to students and scholars.
BUILDING A LONGTERMIST LEGAL FRAMEWORK
A Legal Priorities Project (LPP) at Harvard Law School focused on developing a longtermist legal research agenda and building the community of legal scholars who care about safeguarding future generations. LPP aims to positively influence laws and institutions to reduce existential risk and build a long-term perspective into national policymaking.