Policy Reform

Governments might be the most powerful force on earth. They collectively control $35 trillion and are among our best means of solving large-scale coordination problems. Research in legal theory and the social sciences confirms that countries’ laws and policies have a significant and lasting effect on moral norms and attitudes.

But today’s governments are often mired in short-term thinking. Elected officials are primarily incentivised to take actions that pay off within the election cycle. Estimates of the financial impacts of legislation typically extend no more than a decade. Meanwhile, technological literacy is decreasing — governments simply cannot maintain expertise in every industry, given the pace at which our economy is evolving.

Our grants in policy reform seek to develop leading expertise on the major risks facing humanity and to make tractable recommendations to the governments and intergovernmental organisations most important for regulating AI and biotechnology — ensuring that the scientific basis for these risks is weighed alongside potential economic benefits.

Are you a major philanthropist seeking to learn more about these areas? Get in touch with our Co-CEOs Natalie Cargill and Simran Dhaliwal at natalie@longview.org and simran@longview.org.

Existential risk policy at the OECD
Supporting global cooperation on extreme risks.

The Organisation for Economic Co-operation and Development (OECD) is a leading forum for international coordination on expert policy advice and best governance practices. Their work supports 38 member countries — including the US and the EU — and non-member partners, including China. Their data analysis underpins much work by the UN and the G20. The OECD is unusually consequential for multilateral cooperation on artificial intelligence, as it hosts the dominant international institution for AI governance, the AI Policy Observatory. We have an ongoing partnership with the OECD to support their work on existential risk policy across several directorates, including the Strategic Foresight Unit, the High-Level Risk Forum, the AI Policy Observatory, and the Secretariat on Bio-, Nano-, and Converging Technologies.

AI policy at the Brookings Institution
Creating high-quality economics expertise on transformative AI.

The Brookings Institution is, by most measures, the most important think tank in the world. Their work has contributed to the creation of the United Nations, the Marshall Plan, and the Congressional Budget Office. Now they are turning their attention to transformative AI. We supported Dr Anton Korinek, one of the world’s most senior economists working on existential risks from AI, to explore the economics of transformative AI in collaboration with the Brookings Center for Regulation and Markets.

The Centre for Long-Term Resilience
Making existential risk a mainstream discussion among UK policymakers.

The Centre for Long-Term Resilience is a small team educating UK leaders about the risks humanity faces. On an ongoing basis, they build relationships with policymakers at the Cabinet Office, the House of Commons, and the House of Lords, and present them with the latest longtermist research from institutions like the Future of Humanity Institute. The discussions they facilitate are vital to promoting future-protective policies, such as greater investment in rapid vaccine deployment platforms.