Adam Bales

Hi, I'm Adam Thomas Bales

One of my earliest memories is of a bee stinging me while I was eating mashed potatoes. Then some other events happened, and now I'm an academic philosopher and an amateur writer of fiction.

As Academic Philosopher

Under the name Adam Bales, I am a senior research fellow in philosophy at Oxford University's Global Priorities Institute (and a research fellow at Wolfson College). 

Artificial Intelligence
Currently, I'm most interested in two threads of philosophical and ethical issues raised by artificial intelligence.


The first thread relates to extremely sophisticated AI systems that are (either individually or collectively) more generally cognitively capable than humans. Such systems do not currently exist, but some people think they'll be developed in the coming decades. Some of these people also think that such systems could lead to catastrophe, perhaps killing hundreds of thousands of people (or, more dramatically, leading to human extinction).

Both the timeframe and the catastrophe claims strike me as extremely strong, and I haven't yet seen evidence that convinces me to accept them. Nevertheless, I'm glad that some people are reflecting on the potential consequences of such systems. Quite generally, when humanity finds itself in unprecedented situations, I appreciate people reflecting on potential risks. I'm glad that people reflected on the possibility of nuclear winter before this became established science. Likewise, I'm glad for early work on the impacts of greenhouse gases. And for similar reasons, I'm glad that some people are considering the potential risks of sophisticated AI, even as I remain uncertain about this work's ultimate value.

In relation to this thread, I am currently writing a paper exploring whether expected utility theory is likely to provide a promising way of modelling sophisticated AI systems (spoiler: I suspect not). I have previously co-authored a paper exploring whether we should expect such systems to seek power and act deceptively. And I contributed to a white paper on the idea of AI truthfulness.

The second thread relates to ethical challenges raised by less sophisticated AI systems, including those that now exist. Here, I am worried about impacts on employment and inequality, as well as about the potential of AI to both perpetuate existing injustices and introduce new ones. I am also interested in the new challenges AI raises for moral agency, including the difficulty of attributing moral responsibility for the actions of AI systems and the risk that using AI to make decisions might lead to moral deskilling, as humans become less capable of engaging with moral nuance themselves.

I am not currently doing work in relation to this thread, but I am actively developing the background to do so. If you have recommendations of things that I should read in this area, I'd love to hear from you.

Supererogation and Decision Theory
I am also interested in work on supererogation and decision theory.

Supererogation is the idea that some actions go above and beyond the call of moral duty. Setting aside morality for a moment, in other areas of our lives we often aspire to be more than minimally competent. We might aspire to be good partners, not merely adequate ones. Or we might aspire to be good at football, or at writing, or at knitting, or at navigating in the woods. I am interested in the way that we can take a similar attitude to morality, aspiring to be more than minimally adequate. As a result, I am interested in supererogatory moral decision making.

So far, I have co-authored papers on the idea of rational supererogation and on how supererogation plays a role when we consider our decisions holistically rather than in isolation. I am currently writing a paper arguing (contra Christian Tarsney) that supererogation remains important even once we account for moral uncertainty.

Decision theory is a mathematical and philosophical theory of choice. I have previously defended a particular form of this theory (causal decision theory) against objections. I have also written a number of other papers in this area. I am not actively pursuing further decision-theoretic work, though I think it's likely that I will end up doing more work in this area at some point.

As Amateur Writer
Under the name Thomas Bales, I write short fiction, often in the genre of speculative fiction. Perhaps one day I will link some of my writing here, but for now this part of the website is more aspirational than active.