Posts by Collection

archives

Is the “born this way” message homophobic?

The message of “born this way” is that your sexual orientation is something you’re born with rather than something you choose. This is considered an important point in the justification of gay rights. I’m a strong supporter of gay rights, but I realised just over a year ago that something about this slogan didn’t sit right with me. I’m now pretty confident that basing gay rights on the “born this way” message can be harmful to LGBT people and other oppressed groups.

Prison is no more humane than flogging

Many people believe that corporal punishment has no place in a modern criminal justice system. Imprisonment is seen as a more humane form of punishment, and it is the one employed in most modern criminal justice systems. In this post I ask why we think that imprisonment is humane while corporal punishment is not. I think this should cause us to question the ethics of imprisoning people.

Can we offset immorality?

People offset bad actions in various ways. The most salient example of this is probably carbon offsetting, where we pay a company to reduce the carbon in the atmosphere by roughly the same amount that we put in. But there are arguably more mundane examples of acts that are intended to offset immoral behavior. In this post I ask what moral offsetting is and whether it is something we should be in favor of.

Vegetarianism, abortion, and moral empathy

When people disagree about moral issues, they often fail to treat the moral beliefs of those they disagree with as genuine moral beliefs. Instead, they treat them like mere whims or mild preferences. This shows a lack of what I call moral empathy. I argue that lacking moral empathy can be harmful and can prevent fruitful discussion on divisive topics.

Some noise on signaling

I ask what signaling is and argue that it’s a bad idea to simply accuse people of “signaling” because signaling can mean a lot of things. I also argue that not all signaling is bad.

Against jargon

It’s sometimes useful to introduce new terms into discourse: they can increase communication efficiency, but at the cost of accessibility and sometimes precision. In this post I outline the pros and cons of introducing new, domain-specific terms.

Transmitting credences and transmitting evidence

There is a longstanding debate about whether deliberation prevents us from making any predictions about our own actions. In this post I argue for a weaker thesis, namely that deliberation limits our ability to predict our own actions.

Infinity and the problem of evil

Some fictional dialogues in which I explore whether God should create all good worlds and how this relates to the problem of evil.

Keep others’ identities small

I really like Paul Graham’s advice to “keep your identity small” - to avoid making groups or positions part of your identity if you want to remain unbiased. But I often want to add to it “and keep other people’s identity small too”.

Impossibility reasoning

It’s typical to teach and use sequential reasoning, but any sequential argument can be reformulated as an impossibility result. I argue that thinking about and presenting arguments in terms of impossibility results can be more fruitful than sequential reasoning.

Disagreeing with content and disagreeing with connotations

It’s possible to agree with the content of a piece of writing but to think that the conclusions many readers might draw from it are wrong. I think it’s useful to distinguish between these two kinds of disagreement before criticizing the writing of others.

projects

publications

Objective Epistemic Consequentialism

In this thesis I construct and defend a position that I call objective epistemic consequentialism. Objective epistemic consequentialism states that we ought to believe a proposition P to a given degree if and only if doing so produces the most epistemic value. I argue that this offers a viable response not only to the question of what we should believe and why, but also to the questions of which decision procedures we should commit ourselves to, what is of final epistemic value, and what the nature of epistemic oughts is.
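
As a rough schematic of the central claim (notation mine, not the thesis’s): writing V for epistemic value and c_{P=d} for the overall belief state in which one believes P to degree d, the view says

$$\text{one ought to believe } P \text{ to degree } d \iff d \in \arg\max_{d' \in [0,1]} V\!\left(c_{P=d'}\right).$$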

Recommended citation: Askell, Amanda. ‘Objective Epistemic Consequentialism.’ BPhil thesis, University of Oxford (2011). https://askell.io/publication/objective-epistemic-consequentialism

Epistemic Consequentialism and Epistemic Enkrasia

Published in Epistemic Consequentialism, 2018

In this chapter I investigate what the epistemic consequentialist will say about epistemic enkrasia principles: principles that instruct one not to adopt a belief state that one takes to be irrational. I argue that a certain epistemic enkrasia principle for degrees of belief can be shown to maximize expected accuracy, and thus that a certain kind of epistemic consequentialist is committed to such a principle. But this is bad news for such an epistemic consequentialist because epistemic enkrasia principles are problematic.
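
To give a sense of what ‘maximizing expected accuracy’ means here (my notation, a standard sketch rather than the chapter’s own formalism): given a credence function c over a set of worlds W and an accuracy measure A(c′, w) that scores a credence function c′ at a world w, the expected accuracy of c′ by c’s lights is

$$\mathrm{EA}_c(c') = \sum_{w \in W} c(w)\, A(c', w),$$

and the claim is, roughly, that credences satisfying a certain enkrasia principle maximize this quantity.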

Recommended citation: Askell, Amanda. ‘Epistemic Consequentialism and Epistemic Enkrasia’. In Epistemic Consequentialism, edited by Kristoffer Ahlstrom-Vij and Jeff Dunn. Oxford: Oxford University Press, 2018. https://www.oxfordscholarship.com/view/10.1093/oso/9780198779681.001.0001/oso-9780198779681-chapter-13

Pareto Principles in Infinite Ethics

In this thesis I argue that ethical rankings of worlds that contain infinite levels of wellbeing ought to be consistent with the Pareto principle, which says that if two worlds contain the same agents and some agents are better off in the first world than they are in the second and no agents are worse off than they are in the second, then the first world is better than the second. I show that if we accept four axioms – the Pareto principle, transitivity, an axiom stating that populations of worlds can be permuted, and the claim that if the ‘at least as good as’ relation holds between two worlds then it holds between qualitative duplicates of this world pair – then we must conclude that there is ubiquitous incomparability between infinite worlds.
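
A minimal formalization of the Pareto principle as stated above (notation mine): for worlds w_1 and w_2 containing the same set of agents A, with u_a(w) denoting the wellbeing of agent a in world w,

$$\big(\forall a \in A:\ u_a(w_1) \ge u_a(w_2)\big) \;\wedge\; \big(\exists a \in A:\ u_a(w_1) > u_a(w_2)\big) \;\Rightarrow\; w_1 \succ w_2.$$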

Recommended citation: Askell, Amanda. ‘Pareto Principles in Infinite Ethics.’ PhD thesis, New York University (2018). https://askell.io/publication/pareto-principles-in-infinite-ethics

AI Safety Needs Social Scientists

Published in Distill, 2019

Properly aligning advanced AI systems with human values will require resolving many uncertainties related to the psychology of human rationality, emotion, and biases. These can only be resolved empirically through experimentation — if we want to train AI to do what humans want, we need to study humans.

Recommended citation: Irving, Geoffrey, and Amanda Askell. ‘AI Safety Needs Social Scientists.’ Distill, 2019. https://distill.pub/2019/safety-needs-social-scientists/

Prudential Objections to Atheism

Published in A Companion to Atheism and Philosophy, 2019

In order for prudential objections to atheism to get off the ground, we must believe that we can have prudential reasons for and against believing things. In this chapter, I argue that a modest version of this view is more plausible than it may initially seem. I then explore two kinds of prudential reasons for belief: personal benefits like consolation, health, and community; and Pascal’s contention that we are more likely to experience an infinitely good afterlife if we believe in God.

Recommended citation: Askell, Amanda. ‘Prudential Objections to Atheism’. In A Companion to Atheism and Philosophy, edited by Graham Oppy. Wiley-Blackwell, 2019. https://onlinelibrary.wiley.com/doi/10.1002/9781119119302.ch33

The Role of Cooperation in Responsible AI Development

In this paper, we argue that competitive pressures could incentivize AI companies to underinvest in ensuring their systems are safe, secure, and have a positive social impact. Ensuring that AI systems are developed responsibly may therefore require preventing and solving collective action problems between companies. We note that there are several key factors that improve the prospects for cooperation in collective action problems, and we use these factors to identify strategies for improving the prospects of industry cooperation on the responsible development of AI.

Recommended citation: Askell, Amanda, Miles Brundage, and Gillian Hadfield. ‘The Role of Cooperation in Responsible AI Development.’ arXiv preprint arXiv:1907.04534 (2019). https://arxiv.org/abs/1907.04534

Release Strategies and the Social Impacts of Language Models

Large language models have a range of beneficial uses: they can assist in prose, poetry, and programming; analyze dataset biases; and more. However, their flexibility and generative capabilities also raise misuse concerns. This report discusses OpenAI’s work related to the release of its GPT-2 language model. It discusses staged release, which allowed time between model releases to conduct risk and benefit analyses as model sizes increased. It also discusses ongoing partnership-based research and provides recommendations for better coordination and responsible publication in AI.

Recommended citation: Solaiman, Irene, et al. ‘Release Strategies and the Social Impacts of Language Models.’ arXiv preprint arXiv:1908.09203 (2019). https://arxiv.org/abs/1908.09203

Evidence Neutrality and the Moral Value of Information

Published in Effective Altruism: Philosophical Issues, 2019

In this chapter, I consider whether there is a case for favoring interventions whose effectiveness has stronger evidential support, when expected effectiveness is equal. I argue that in fact the reverse is true: when expected value is equal one should prefer to invest in interventions that have less evidential support, on the grounds that by doing so one can acquire evidence of their effectiveness (or ineffectiveness) that may then be valuable for future investment decisions.
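
A toy illustration of this value-of-information argument (numbers mine, not the chapter’s): suppose interventions A and B have equal expected value, where A is known to produce 1 unit of value and B produces either 0 or 2 units with probability 1/2 each. Funding B once reveals its true value, so over this round plus one future round,

$$\underbrace{1}_{\text{fund B now (expected)}} \;+\; \underbrace{\tfrac{1}{2} \cdot 2 + \tfrac{1}{2} \cdot 1}_{\text{then fund whichever is better}} \;=\; 2.5 \;>\; 1 + 1 \;=\; 2,$$

so funding the less-evidenced intervention first does better in expectation than funding A twice.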

Recommended citation: Askell, Amanda. ‘Evidence Neutrality and the Moral Value of Information’. In Effective Altruism: Philosophical Issues, edited by Hilary Greaves and Theron Pummer. Oxford: Oxford University Press, 2019. https://global.oup.com/academic/product/effective-altruism-9780198841364

talks

The Moral Value of Information

When faced with ethical decisions, we generally prefer to act on more evidence rather than less. If the expected values of two options available to us are similar but the expected value of one option is based on more evidence than the other, then we will generally prefer the option that has more evidential support. In this talk, I argue that although we are intuitively disinclined to favor interventions with poor evidential support, there are reasons for thinking that these are sometimes better options than interventions with a proven track record.

Watch the talk here.

Pascal’s Wager and other low risks with high stakes

In this episode of Rationally Speaking I argue that it’s much trickier to rebut Pascal’s Wager than most people think. Julia and I also discuss how to handle other decisions where a risk has very low probability but would matter a lot if it materialized – should you round it down to zero? Does it matter how measurable the risk is? And should you take into account the chance you’re being scammed?

Listen to the episode here.

Moral Offsetting

with Tyler John (co-author and co-presenter)

Many people try to offset the harm their behaviors do to the world. People offset carbon, river and air pollution, risky clinical trials, and consuming animal products. Yet it is not clear whether and when such offsetting is permissible. We give a preliminary conceptual analysis of moral offsetting. We then show that every comprehensive moral theory faces a trilemma: (i) any bad action can in principle be permissibly offset; (ii) no actions can in principle be permissibly offset; or (iii) there is a bad act that can be permissibly offset such that another act worse by an arbitrarily small degree cannot be offset, no matter how great the offset.
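
Schematically (my notation, not the talk’s): let O(x) say that bad act x can in principle be permissibly offset, and let b(x) measure how bad x is. The trilemma is that every comprehensive moral theory is committed to one of:

$$\text{(i)}\ \forall x\, O(x); \qquad \text{(ii)}\ \forall x\, \neg O(x); \qquad \text{(iii)}\ \exists x, y:\ O(x) \,\wedge\, \neg O(y) \,\wedge\, 0 < b(y) - b(x) < \varepsilon, \text{ for arbitrarily small } \varepsilon > 0.$$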

See the conference website here and prize announcement here.

AI safety needs social scientists

When an AI wins a game against a human, that AI has usually trained by playing that game against itself millions of times. When an AI recognizes that an image contains a cat, it’s probably been trained on thousands of cat photos. So if we want to teach an AI about human preferences, we’ll probably need lots of data to train it. In this talk, I explore ways that social science might help us steer advanced AI in the right direction.

Watch the talk here.

OpenAI’s GPT-2 Language Model

TWIML hosted a live-streamed panel discussion exploring the issues surrounding OpenAI’s GPT-2 announcement. The panel covers where GPT-2 fits in the broader NLP landscape, why OpenAI didn’t release the full model, and best practices for honest reporting of new ML results.

Watch the episode here.

Publication norms, malicious uses of AI, and general-purpose learning algorithms

In this episode of The 80,000 Hours podcast, Miles Brundage, Jack Clark, and I discuss a range of topics in AI policy, including the most significant changes in the AI policy world over the last year or two, how much the field is still in the phase of just doing research versus taking concrete actions, how we should approach possible malicious uses of AI, and publication norms for AI research.

Listen to the episode here.

Responsible AI development as a collective action problem

It has been argued that competitive pressures could cause AI developers to cut corners on the safety of their systems. If this is true, however, why don’t we see this dynamic play out more often in other private markets? In this talk I outline the standard incentives to produce safe products: market incentives, liability law, and regulation. I argue that if these incentives are too weak because of information asymmetries or other factors, competitive pressure could cause firms to invest in safety below a level that is socially optimal. I argue that, in such circumstances, responsible AI development is a kind of collective action problem. I then develop a conceptual framework to help us identify levers to improve the prospects for cooperation in this kind of collective action problem.

Best Practices for Dual Use Research

A discussion of AI & dual use at Stanford’s AI Salon with Miles Brundage (OpenAI), Dr. Megan J Palmer (Stanford University), and Dr. Allison Berke (Stanford University).

Girl Geek X: What is AI Policy?

AI systems can fail unexpectedly, be used in ways their creators didn’t anticipate, or have unforeseen social consequences. But the same can be said of cars and pharmaceuticals, so why should we think there are any unique policy challenges posed by AI? In this talk, I point out that the primary mechanisms for preventing these sorts of failures in other industries are not currently well-suited to AI systems. I then discuss the ways that engineers can help meet these policy challenges.

Watch the talk and discussion here.

teaching