One of my earliest memories is of a bee stinging me while I was eating mashed potatoes. Then some other events happened, and now I'm an assistant director for philosophy (and senior research fellow) at Oxford University's Global Priorities Institute, as well as a research fellow at Wolfson College.
Artificial Intelligence
My primary research focus is on the implications of AI for relations of sociopolitical power. Some of these implications concern power relations between humans: for example, AI might lead to the centralisation of corporate power in a small number of tech companies, or might bolster dictatorships by enabling more sophisticated surveillance. Other implications concern power relations with AI itself, whether humans disempowering AI (if AI systems become moral patients) or AI disempowering humans (if AI systems become able to seize power from us). If you have thoughts about AI and power, I'd love to hear from you.
Supererogation and Decision Theory
I am also interested in work on supererogation and decision theory.
Supererogation is the idea that some actions go above and beyond the call of moral duty. Setting aside morality for a moment, in other areas of our lives we often aspire to be more than minimally competent. We might aspire to be good partners, not merely adequate ones. Or we might aspire to be good at football, or at writing, or at knitting, or at navigating in the woods. I am interested in how we can take a similar attitude to morality, aspiring to be more than minimally adequate.
Decision theory is a mathematical and philosophical theory of choice. I have previously defended a particular form of this theory (causal decision theory) against objections, and I have written a number of other papers in this area. I am not actively pursuing decision-theoretic work at present, though I expect to return to it at some point.