Ensuring the Safety of Artificial Intelligence

Recommended citation: Askell, Amanda. "Ensuring the Safety of Artificial Intelligence." In The Long View: Essays on Policy, Philanthropy, and the Long-Term Future, edited by Natalie Cargill and Tyler M. John. First: Strategic Insight Ltd, 2021. https://da63870c-9860-42b7-8b51-fb15d5bae843.filesusr.com/ugd/07becb_8fe75ba08a4c457ba2988e96e1ee6654.pdf

Summary: Technological innovation has been a source of both great flourishing and great risk throughout human history. Artificial intelligence stands out among current technologies as holding particularly great promise and particularly great risk, since it could one day perform every kind of intellectual work. This chapter makes the case that work done today to ensure the safety of artificial intelligence could have a significant long-term impact on humanity, and that we should therefore begin this work now. It first explains, in historical context, why progress on AI is likely to have a significant effect on the long-term future. Though the trajectory of this progress and the nature of its effects are uncertain, we can act today to meaningfully alter them for the better: by gathering information about AI progress and its impacts, by investing resources in the safe development and responsible deployment of AI, and by working to resolve the collective action problems that threaten to undermine these efforts.
