Why AI? Why now?
AI capabilities are advancing rapidly, with models growing in size, training compute, and algorithmic efficiency. A 2022 survey of 4,000 AI researchers estimated a 50% chance that AI could outperform humans at all tasks by 2059, and a 14% chance of outcomes as severe as human extinction. Analyses from 2025 suggest that the length of tasks AI systems can complete is doubling every seven months, and that training runs are projected to become up to 1,000,000× more compute-intensive between GPT-4 and the state of the art in 2030.
The risks
While advances in AI could bring tremendous benefits, they also pose a number of acute risks:
- Misuse by malicious actors: AI can lower the barriers to developing biological weapons and conducting cyberattacks.
- Concentration of power: If AI automates human labour, AI companies could accumulate vast power. Alternatively, authoritarian governments could entrench their power with AI-enabled military and police forces.
- Geopolitical instability: Nations may race to deploy powerful AI first, prioritising speed over safety and increasing the risk of conflict. At the same time, AI could enable novel weaponry that destabilises the global order.
- Catastrophic misalignment: Advanced AI systems could have goals of their own that diverge from human interests, potentially bringing them into violent conflict with humanity.
Why philanthropy?
Governments are racing to adapt, and market incentives sideline safety: private markets systematically underinvest in managing AI risks. The dangers of advanced AI affect everyone, but the profits are concentrated in a handful of companies, giving them little reason to prioritise societal interests over short-term gains. At the same time, companies and countries feel pressure to stay at the frontier of AI development, fearing that any slowdown in the name of safety could mean losing ground. This is where philanthropy comes in: it can advance the public good, help set rules of the road, and ensure that AI developers do not recklessly race ahead and cut corners on safety.
Why Longview?
At Longview, we fund the interventions most likely to shape the trajectory of advanced AI for the better. Our grantmaking supports cutting-edge technical research, policy development, and field-building efforts that address the most urgent risks while preserving AI's extraordinary potential. As of October 2025, we have directed over $85 million towards AI risk reduction. With deep expertise and a track record of catalytic giving, we help ensure that even modest philanthropic investments can influence how the world's most powerful technologies are developed and deployed.
Are you a major philanthropist seeking to learn more about this area? Get in touch with our CEO Simran Dhaliwal at simran@longview.org.