Code.org Ethics of AI Series
Two videos with Code.org, “Equal Access and Algorithmic Bias” and “Privacy & the Future of Work”, and a panel exploring the ethics of AI.
AI systems can fail unexpectedly, be used in ways their creators didn’t anticipate, or have unforeseen social consequences. But the same can be said of cars and pharmaceuticals, so why should we think there are any unique policy challenges posed by AI? In this talk, I point out that the primary mechanisms for preventing these sorts of failures in other industries are not currently well-suited to AI systems. I then discuss the ways that engineers can help meet these policy challenges.
Ashley J. Llorens, Michael Horowitz, Kara Frederick, Elsa B. Kania, and I discuss United States leadership in artificial intelligence in a new era of global competition.
A discussion of AI & dual use at Stanford’s AI Salon with Miles Brundage (OpenAI), Dr. Megan J Palmer (Stanford University), and Dr. Allison Berke (Stanford University).
It has been argued that competitive pressures could cause AI developers to cut corners on the safety of their systems. If this is true, however, why don’t we see this dynamic play out more often in other private markets? In this talk I outline the standard incentives to produce safe products: market incentives, liability law, and regulation. I argue that if these incentives are too weak because of information asymmetries or other factors, competitive pressure could cause firms to invest in safety below a level that is socially optimal. I argue that, in such circumstances, responsible AI development is a kind of collective action problem. I then develop a conceptual framework to help us identify levers to improve the prospects for cooperation in this kind of collective action problem.
In this episode of The 80,000 Hours podcast, Miles Brundage, Jack Clark, and I discuss a range of topics in AI policy, including the most significant changes in the AI policy world over the last year or two, how much the field is still in the phase of just doing research versus taking concrete actions, how we should approach possible malicious uses of AI, and publication norms for AI research.
TWIML hosted a live-streamed panel discussion exploring the issues surrounding OpenAI’s GPT-2 announcement. The panel covers where GPT-2 fits in the broader NLP landscape, why OpenAI didn’t release the full model, and best practices for honest reporting of new ML results.
Joshua Greene (Harvard), Nick Bostrom (FHI), moderator Lucas Perry (FLI), and I discuss the question ‘What goal should civilization strive for?’ at the Future of Life Institute’s Beneficial AGI conference.
When an AI wins a game against a human, that AI has usually trained by playing that game against itself millions of times. When an AI recognizes that an image contains a cat, it’s probably been trained on thousands of cat photos. So if we want to teach an AI about human preferences, we’ll probably need lots of data to train it. In this talk, I explore ways that social science might help us steer advanced AI in the right direction.
In this episode of The 80,000 Hours podcast we cover a range of topics, including the problem of ‘moral cluelessness’, whether there is an ethical difference between prison and corporal punishment, how to resolve ‘infinitarian paralysis’, how we should think about jargon, and how to have moral empathy for intellectual adversaries.
With Tyler John (co-author and co-presenter). Many people try to offset the harm their behavior does to the world: they offset carbon emissions, river and air pollution, risky clinical trials, and the consumption of animal products. Yet it is not clear whether and when such offsetting is permissible. We give a preliminary conceptual analysis of moral offsetting and then show that every comprehensive moral theory faces a trilemma: either (i) any bad action can in principle be permissibly offset; (ii) no action can in principle be permissibly offset; or (iii) there is a bad act that can be permissibly offset while another act that is worse by an arbitrarily small degree cannot be offset, no matter how great the offset.
In this episode of Rationally Speaking I argue that it’s much trickier to rebut Pascal’s Wager than most people think. Julia and I also discuss how to handle other decisions where a risk has very low probability but would matter a lot if it came true: should you round such risks down to zero? Does it matter how measurable the risk is? And should you take into account the chance you’re being scammed?
When faced with ethical decisions, we generally prefer to act on more evidence rather than less. If the expected values of two available options are similar, but the estimate for one option is based on more evidence than the estimate for the other, we will generally prefer the option with more evidential support. In this talk, I argue that although we are intuitively disinclined to favor interventions with poor evidential support, there are reasons to think that favoring them is sometimes better than favoring interventions with a proven track record.