About Me

I’m a philosopher working on AI alignment at Anthropic. Before my current role I worked as a research scientist on the policy team at OpenAI, where I focused on topics like responsible AI development, AI safety via debate, and human baselines for ML performance.

I’ve been involved in the effective altruism movement since around 2010. I’m a member of Giving What We Can and have appeared on the 80,000 Hours podcast a couple of times.

I have a PhD in philosophy from NYU with a thesis on infinite ethics and a BPhil in philosophy from the University of Oxford. My philosophy work is mostly in ethics, decision theory, and formal epistemology.