Publications and Preprints

Papers and Book Chapters

Evidence Neutrality and the Moral Value of Information

Published in Effective Altruism: Philosophical Issues, 2019

In this chapter, I consider whether there is a case for favoring interventions whose effectiveness has stronger evidential support, when expected effectiveness is equal. I argue that in fact the reverse is true: when expected value is equal, one should prefer to invest in interventions that have less evidential support, on the grounds that by doing so one can acquire evidence of their effectiveness (or ineffectiveness) that may then be valuable for future investment decisions.
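
To make the information-value argument concrete, here is a toy two-round funding calculation (the setup and numbers are my own illustration, not the chapter's): suppose intervention A is known to yield 0.5 units of value per grant, while intervention B yields either 0 or 1 unit with equal probability, so the two have equal expected value per round.

```latex
% Two funding rounds; A pays a known 0.5 per round, B pays 0 or 1
% (probability 1/2 each), so both have expected value 0.5 per round.
\[
\underbrace{0.5 + 0.5}_{\text{fund $A$ twice}} = 1.0
\qquad\text{vs.}\qquad
\underbrace{0.5}_{\text{fund $B$ first}}
\;+\;
\underbrace{\tfrac{1}{2}\cdot 1 + \tfrac{1}{2}\cdot 0.5}_{\text{then fund whichever is better}}
= 1.25
\]
% Funding B first reveals its value, and that information is worth
% 0.25 in expectation in the second round.
```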

Recommended citation: Askell, Amanda. ‘Evidence Neutrality and the Moral Value of Information’. In Effective Altruism: Philosophical Issues, edited by Hilary Greaves and Theron Pummer. Oxford: Oxford University Press, 2019. https://global.oup.com/academic/product/effective-altruism-9780198841364

Prudential Objections to Atheism

Published in A Companion to Atheism and Philosophy, 2019

In order for prudential objections to atheism to get off the ground, we must believe that we can have prudential reasons for and against believing things. In this chapter, I argue that a modest version of this view is more plausible than it may initially seem. I then explore two kinds of prudential reasons for belief: personal benefits like consolation, health, and community; and Pascal’s contention that we are more likely to experience an infinitely good afterlife if we believe in God.
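
Pascal's reasoning can be sketched in expected-utility terms. In my notation (not the chapter's), let p and q be the probabilities of the infinitely good afterlife given belief and nonbelief, with p > q, and let c be the finite cost of believing; replacing the infinite payoff with a large finite stand-in V sidesteps the complications of infinite expectations:

```latex
% Sketch of the "more likely" wager; V stands in for the infinite
% payoff, c for the finite cost of belief (my notation throughout).
\[
\mathrm{EU}(\text{believe}) - \mathrm{EU}(\text{disbelieve})
\;=\; (p - q)\,V - c \;>\; 0
\quad\text{whenever } p > q \text{ and } V > \tfrac{c}{\,p - q\,}.
\]
```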

Recommended citation: Askell, Amanda. ‘Prudential Objections to Atheism’. In A Companion to Atheism and Philosophy, edited by Graham Oppy. Wiley-Blackwell, 2019. https://onlinelibrary.wiley.com/doi/10.1002/9781119119302.ch33

AI Safety Needs Social Scientists

Published in Distill, 2019

Properly aligning advanced AI systems with human values will require resolving many uncertainties related to the psychology of human rationality, emotion, and biases. These can only be resolved empirically through experimentation — if we want to train AI to do what humans want, we need to study humans.

Recommended citation: Irving, Geoffrey, and Amanda Askell. ‘AI Safety Needs Social Scientists’. Distill, 2019. https://distill.pub/2019/safety-needs-social-scientists/

Epistemic Consequentialism and Epistemic Enkrasia

Published in Epistemic Consequentialism, 2018

In this chapter I investigate what the epistemic consequentialist will say about epistemic enkrasia principles: principles that instruct one not to adopt a belief state that one takes to be irrational. I argue that a certain epistemic enkrasia principle for degrees of belief can be shown to maximize expected accuracy, and thus that a certain kind of epistemic consequentialist is committed to such a principle. But this is bad news for such an epistemic consequentialist because epistemic enkrasia principles are problematic.
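
For readers who want the shape of the expected-accuracy argument, here is a minimal sketch using the Brier score (my choice of scoring rule, for illustration only):

```latex
% Expected Brier inaccuracy of adopting credence x in P, by the lights
% of one's current credence c in P (w(P) is P's truth value, 0 or 1):
\[
\mathbb{E}_c\bigl[(x - w(P))^2\bigr]
\;=\; c\,(x-1)^2 + (1-c)\,x^2 .
\]
% Setting the derivative 2c(x-1) + 2(1-c)x = 2(x - c) to zero gives
% x = c: expected inaccuracy is uniquely minimized by one's own
% credence, so adopting a credence one regards as irrational is
% expected to cost accuracy.
```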

Recommended citation: Askell, Amanda. ‘Epistemic Consequentialism and Epistemic Enkrasia’. In Epistemic Consequentialism, edited by Kristoffer Ahlstrom-Vij and Jeff Dunn. Oxford: Oxford University Press, 2018. https://www.oxfordscholarship.com/view/10.1093/oso/9780198779681.001.0001/oso-9780198779681-chapter-13

Reports

Release Strategies and the Social Impacts of Language Models

Large language models have a range of beneficial uses: they can assist in prose, poetry, and programming; analyze dataset biases; and more. However, their flexibility and generative capabilities also raise misuse concerns. This report discusses OpenAI’s work related to the release of its GPT-2 language model. It discusses staged release, which allows time between model releases to conduct risk and benefit analyses as model sizes increase. It also describes ongoing partnership-based research and offers recommendations for better coordination and responsible publication in AI.

Recommended citation: Solaiman, Irene, et al. ‘Release Strategies and the Social Impacts of Language Models.’ arXiv preprint arXiv:1908.09203 (2019). https://arxiv.org/abs/1908.09203

Preprints

The Role of Cooperation in Responsible AI Development

In this paper, we argue that competitive pressures could incentivize AI companies to underinvest in ensuring their systems are safe, secure, and socially beneficial. Ensuring that AI systems are developed responsibly may therefore require preventing and solving collective action problems between companies. We note that several key factors improve the prospects for cooperation in collective action problems, and we use these factors to identify strategies for promoting industry cooperation on the responsible development of AI.
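
The underlying structure is a standard collective action problem. A schematic two-developer payoff matrix, with hypothetical numbers purely for illustration (not drawn from the paper):

```latex
% Hypothetical payoffs: two AI developers each choose whether to
% invest in safety (Invest) or race ahead (Cut corners). Entries are
% (row player, column player) payoffs; illustrative numbers only.
\[
\begin{array}{r|cc}
 & \text{Invest} & \text{Cut corners} \\ \hline
\text{Invest} & (3,\,3) & (0,\,5) \\
\text{Cut corners} & (5,\,0) & (1,\,1)
\end{array}
\]
% Cutting corners is dominant for each developer, yet mutual
% investment Pareto-dominates mutual corner-cutting: the structure
% of a prisoner's dilemma.
```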

Recommended citation: Askell, Amanda, Miles Brundage, and Gillian Hadfield. ‘The Role of Cooperation in Responsible AI Development.’ arXiv preprint arXiv:1907.04534 (2019). https://arxiv.org/abs/1907.04534

Theses

Pareto Principles in Infinite Ethics

In this thesis I argue that ethical rankings of worlds that contain infinite levels of wellbeing ought to be consistent with the Pareto principle, which says that if two worlds contain the same agents and some agents are better off in the first world than they are in the second and no agents are worse off than they are in the second, then the first world is better than the second. I show that if we accept four axioms – the Pareto principle, transitivity, an axiom stating that populations of worlds can be permuted, and the claim that if the ‘at least as good as’ relation holds between two worlds then it holds between qualitative duplicates of this world pair – then we must conclude that there is ubiquitous incomparability between infinite worlds.
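
The Pareto principle at issue can be stated formally; a sketch in my notation, where A is the shared population and $u_w(a)$ is agent a's level of wellbeing in world w:

```latex
% Pareto: same agents, some better off in w1, none worse off in w1,
% therefore w1 is better than w2.
\[
\Bigl(\forall a \in A:\; u_{w_1}(a) \ge u_{w_2}(a)\Bigr)
\;\wedge\;
\Bigl(\exists a \in A:\; u_{w_1}(a) > u_{w_2}(a)\Bigr)
\;\Longrightarrow\; w_1 \succ w_2 .
\]
```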

Recommended citation: Askell, Amanda. ‘Pareto Principles in Infinite Ethics.’ PhD thesis, New York University (2018). https://askell.io/publication/pareto-principles-in-infinite-ethics

Objective Epistemic Consequentialism

In this thesis I construct and defend a position that I call objective epistemic consequentialism. Objective epistemic consequentialism states that we ought to believe a proposition P to a given degree if and only if doing so produces the most epistemic value. I argue that this offers a viable response not only to the question of what we should believe and why, but also to the questions of which decision procedures we should commit ourselves to, what is of final epistemic value, and what the nature of epistemic oughts is.
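
The view's central biconditional can be put compactly; a sketch in my own notation, where V is epistemic value and B ranges over the available belief states:

```latex
% One ought to believe P to degree d iff doing so maximizes
% epistemic value among the available belief states.
\[
O\bigl(\mathrm{bel}(P, d)\bigr)
\;\Longleftrightarrow\;
\mathrm{bel}(P, d) \in \arg\max_{B}\, V(B).
\]
```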

Recommended citation: Askell, Amanda. ‘Objective Epistemic Consequentialism.’ BPhil thesis, University of Oxford (2011). https://askell.io/publication/objective-epistemic-consequentialism