AI systems are now, by many measures, outperforming humans on tasks ranging from drug development to gaming to visual art. AI researchers and top technology investors expect this progress to continue. A 2022 survey of over 4,000 researchers who had published at NeurIPS and ICML (two top AI conferences) produced an aggregate forecast of a 50% chance that unaided machines will be able to accomplish every task better and more cheaply than human workers by 2059.
Improvements in AI have enormous potential to better our world: increasing the speed and accuracy of medical diagnoses, reducing traffic accidents by enabling autonomous vehicles, facilitating personalised education, accelerating the development of sustainable energy, and more. But the risks of transformative AI are also high. On average, respondents to the above-mentioned survey of top AI researchers said that once unaided machines can accomplish every task better and more cheaply than human workers, the effect is 14% likely to be “extremely bad (e.g. human extinction)”.
To mitigate these risks, we need to work out how to build AI systems that reliably pursue their users’ goals. We also need sufficient coordination between companies and governments to ensure that these solutions are broadly implemented, and that the long-term trajectory of our world is not determined by a malicious or reckless actor.