Speaking

Code.org Ethics of AI Series

December 08, 2020

Code.org

Two videos with Code.org (Equal Access and Algorithmic Bias; Privacy & the Future of Work) and a panel exploring the ethics of AI.

Girl Geek X: What is AI Policy?

September 10, 2019

OpenAI Girl Geek Dinner 2019

AI systems can fail unexpectedly, be used in ways their creators didn’t anticipate, or have unforeseen social consequences. But the same can be said of cars and pharmaceuticals, so why should we think AI poses any unique policy challenges? In this talk, I point out that the primary mechanisms for preventing these sorts of failures in other industries are not currently well-suited to AI systems. I then discuss the ways that engineers can help meet these policy challenges.

Best Practices for Dual Use Research

May 17, 2019

AI Salon: Stanford Artificial Intelligence Laboratory

A discussion of AI & dual use at Stanford’s AI Salon with Miles Brundage (OpenAI), Dr. Megan J Palmer (Stanford University), and Dr. Allison Berke (Stanford University).

Having our cake and eating it too: How to develop AI competitively without falling victim to collective action problems

April 29, 2019

Berkman Klein Center for Internet & Society, Harvard University

In this talk I outline the standard incentives to produce safe products: market incentives, liability law, and regulation. I argue that if these incentives are too weak because of information asymmetries or other factors, competitive pressure could cause firms to invest in safety below a level that is socially optimal.

Responsible AI development as a collective action problem

April 28, 2019

Effective Altruism Global X: Boston 2019 & EAG London 2018

It has been argued that competitive pressures could cause AI developers to cut corners on the safety of their systems. If this is true, however, why don’t we see this dynamic play out more often in other private markets? In this talk I outline the standard incentives to produce safe products: market incentives, liability law, and regulation. I argue that if these incentives are too weak because of information asymmetries or other factors, competitive pressure could cause firms to invest in safety below a level that is socially optimal. I argue that, in such circumstances, responsible AI development is a kind of collective action problem. I then develop a conceptual framework to help us identify levers to improve the prospects for cooperation in this kind of collective action problem.

Publication norms, malicious uses of AI, and general-purpose learning algorithms

March 19, 2019

The 80,000 Hours Podcast

In this episode of The 80,000 Hours podcast, Miles Brundage, Jack Clark, and I discuss a range of topics in AI policy, including the most significant changes in the AI policy world over the last year or two, how much the field is still in the phase of just doing research versus taking concrete actions, how we should approach possible malicious uses of AI, and publication norms for AI research.

OpenAI’s GPT-2 Language Model

February 24, 2019

The TWIML AI Podcast

TWIML hosted a live-streamed panel discussion exploring the issues surrounding OpenAI’s GPT-2 announcement. The discussion covers where GPT-2 fits in the broader NLP landscape, why OpenAI didn’t release the full model, and best practices for the honest reporting of new ML results.

BAGI Panel: What Goal Should Civilization Strive For?

January 07, 2019

Beneficial AGI 2019

Joshua Greene (Harvard), Nick Bostrom (FHI), moderator Lucas Perry (FLI) and I discuss the question of ‘What goal should civilization strive for?’ at the Future of Life Institute’s Beneficial AGI conference.

AI safety needs social scientists

October 28, 2018

Effective Altruism Global: London 2018 & San Francisco 2019

When an AI wins a game against a human, that AI has usually trained by playing that game against itself millions of times. When an AI recognizes that an image contains a cat, it’s probably been trained on thousands of cat photos. So if we want to teach an AI about human preferences, we’ll probably need lots of data to train it. In this talk, I explore ways that social science might help us steer advanced AI in the right direction.

Moral empathy, the value of information & the ethics of infinity

September 11, 2018

The 80,000 Hours Podcast

In this episode of The 80,000 Hours podcast we cover a range of topics including the problem of ‘moral cluelessness’, whether there is an ethical difference between prison and corporal punishment, how to resolve ‘infinitarian paralysis’, how we should think about jargon, and having moral empathy for intellectual adversaries.

Moral Offsetting

August 11, 2017

Rocky Mountain Ethics Congress 2017 & Effective Altruism Global: San Francisco 2017

With Tyler John (co-author and co-presenter). Many people try to offset the harm their behaviors do to the world. People offset carbon emissions, river and air pollution, risky clinical trials, and the consumption of animal products. Yet it is not clear whether and when such offsetting is permissible. We give a preliminary conceptual analysis of moral offsetting. We then show that every comprehensive moral theory faces a trilemma: (i) any bad action can in principle be permissibly offset; (ii) no actions can in principle be permissibly offset; or (iii) there is a bad act that can be permissibly offset such that another act worse by an arbitrarily small degree cannot be offset, no matter how great the offset.

Pascal’s Wager and other low risks with high stakes

August 06, 2017

Rationally Speaking

In this episode of Rationally Speaking I argue that it’s much trickier to rebut Pascal’s Wager than most people think. Julia and I also discuss how to handle other decisions where a risk has very low probability but would matter a lot if it came true – should you round them down to zero? Does it matter how measurable the risk is? And should you take into account the chance you’re being scammed?

The Moral Value of Information

June 04, 2017

Effective Altruism Global: Boston 2017

When faced with ethical decisions, we generally prefer to act on more evidence rather than less. If the expected values of two options available to us are similar but one estimate is based on more evidence than the other, we will generally prefer the option with more evidential support. In this talk, I argue that although we are intuitively disinclined to favor interventions with poor evidential support, there are reasons to think such interventions are sometimes better than those with a proven track record.