Askell, Amanda. ‘Objective Epistemic Consequentialism.’ BPhil thesis, University of Oxford (2011).
Askell, Amanda. ‘Epistemic Consequentialism and Epistemic Enkrasia.’ In Epistemic Consequentialism, edited by Kristoffer Ahlstrom-Vij and Jeff Dunn. Oxford: Oxford University Press, 2018.
Askell, Amanda. ‘Pareto Principles in Infinite Ethics.’ PhD thesis, New York University (2018).
Irving & Askell, ‘AI Safety Needs Social Scientists’, Distill, 2019.
Askell, Amanda. ‘Prudential Objections to Atheism’. In A Companion to Atheism and Philosophy, edited by Graham Oppy. Wiley-Blackwell, 2019.
Askell, Amanda, Miles Brundage, and Gillian Hadfield. ‘The Role of Cooperation in Responsible AI Development.’ arXiv preprint arXiv:1907.04534 (2019).
Solaiman, Irene, et al. ‘Release Strategies and the Social Impacts of Language Models.’ arXiv preprint arXiv:1908.09203 (2019).
Askell, Amanda. ‘Evidence Neutrality and the Moral Value of Information’. In Effective Altruism: Philosophical Issues, edited by Hilary Greaves and Theron Pummer. Oxford: Oxford University Press, 2019.
Brundage, Miles, Shahar Avin, Jasmine Wang, Haydn Belfield, et al. ‘Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims.’ arXiv preprint arXiv:2004.07213 (2020).
Brown, Tom, Ben Mann, Nick Ryder, Melanie Subbiah, et al. ‘Language Models are Few-Shot Learners.’ arXiv preprint arXiv:2005.14165 (2020).
Radford, Alec & Kim, Jong Wook, et al. "Learning Transferable Visual Models From Natural Language Supervision." (2021).
Askell, Amanda. ‘Ensuring the Safety of Artificial Intelligence.’ In The Long View: Essays on Policy, Philanthropy, and the Long-Term Future, edited by Natalie Cargill and Tyler M. John. First Strategic Insight Ltd, 2021.
Askell, Amanda, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Ben Mann, et al. ‘A General Language Assistant as a Laboratory for Alignment.’ arXiv preprint arXiv:2112.00861 (2021).