As machine learning becomes increasingly widespread in the real world, a growing set of well-documented potential harms must be acknowledged and addressed. In particular, valid concerns about data privacy, algorithmic bias, automation risk, potential malicious uses, and more have highlighted the need for active consideration of critical ethical issues in the field. In light of this, machine learning researchers have been called on to consider not only the potential benefits of their research but also its potential negative societal impacts, and to adopt measures that enable positive trajectories to unfold while mitigating the risk of harm. However, grappling with ethics remains a difficult and unfamiliar problem for many in the field.

A common difficulty in assessing ethical impact is its indirectness: most papers focus on general-purpose methodologies (e.g., optimization algorithms), whereas ethical concerns are more apparent in downstream applications (e.g., surveillance systems). Moreover, real-world impact, both positive and negative, often emerges from the cumulative progress of many papers, making it difficult to attribute impact to any individual paper. Furthermore, standard research ethics mechanisms such as Institutional Review Boards (IRBs) are not always a good fit for machine learning, and problematic research practices, such as those involving extensive environmental and labor costs or inappropriate data use, are so ingrained in community norms that it can be difficult to articulate where to draw the line as expectations evolve.

How should machine learning researchers wrestle with these topics in their own research? In this panel, we invite the NeurIPS community to contribute questions stemming from their own research and other experiences, so that we can develop community norms around AI ethics and provide concrete guidance to individual researchers.