Responsible AI development as a collective action problem


It has been argued that competitive pressures could cause AI developers to cut corners on the safety of their systems. If this is true, however, why don't we see this dynamic play out more often in other private markets? In this talk I outline the standard incentives to produce safe products: market incentives, liability law, and regulation. I argue that if these incentives are weakened by information asymmetries or other factors, competitive pressure can push firms to invest in safety below the socially optimal level. In such circumstances, responsible AI development becomes a kind of collective action problem. I then develop a conceptual framework to help identify levers for improving the prospects for cooperation in this kind of collective action problem.
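The collective action structure described above can be illustrated with a minimal two-firm game. This sketch is not from the talk; the payoff values are hypothetical, chosen only so that cutting corners strictly dominates investing in safety while mutual investment is better for both firms:

```python
# Hypothetical illustration: responsible AI development as a
# two-firm prisoner's dilemma. Payoff numbers are made up.
# Each firm chooses to "invest" in safety or "cut" corners.
payoffs = {
    # (firm A action, firm B action): (A's payoff, B's payoff)
    ("invest", "invest"): (3, 3),  # socially optimal outcome
    ("invest", "cut"):    (0, 5),  # the corner-cutter gains a competitive edge
    ("cut",    "invest"): (5, 0),
    ("cut",    "cut"):    (1, 1),  # both race; safety is underprovided
}

def best_response(opponent_action):
    """Return the action maximizing a firm's own payoff,
    holding the other firm's action fixed."""
    return max(["invest", "cut"],
               key=lambda a: payoffs[(a, opponent_action)][0])

# "cut" is each firm's best response to either choice, so (cut, cut)
# is the unique equilibrium even though (invest, invest) pays more.
for other in ["invest", "cut"]:
    print(f"best response to {other}: {best_response(other)}")
```

Under these assumed payoffs, underinvestment in safety is an equilibrium outcome even though both firms would prefer mutual investment, which is what makes it a collective action problem rather than a failure of any single firm.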