Research

Moral circle expansion

Our moral circle refers to who we do and do not consider worthy of moral concern. My research aims to understand what shapes these judgments, including factors about the person making the judgment and factors about the entity being judged. I am particularly interested in how we ascribe moral concern to distant others, such as people who live far away or who are not yet born, non-human animals, and artificial entities. In this work I also examine how children and adults differ in their ascriptions of moral worth, finding that children appear to be much more willing to grant moral concern to distant others.


Unusually altruistic groups

Most people are kind and generous towards friends and family, but some engage in acts of altruism towards distant others, such as non-directed kidney donation. My research aims to understand what is unique about those who engage in unusually altruistic acts (put another way, those who have an expansive moral circle). To date, I have conducted research with people who have taken the Giving What We Can pledge to donate at least 10% of their income to effective charities, and with children who choose to become vegetarian in meat-eating families. I am always interested in meeting and working with other altruistic groups or individuals, so please feel free to reach out to me!


Natural-is-better bias

We often see natural things as good and unnatural things as bad; in some cases, we even see them as moral and immoral. But we know little about how these beliefs emerge or why they exist at all. My research aims to understand this. I am particularly interested in naturalness in the context of cultured meat, a novel food technology that allows for the production of meat without industrial farming. My work shows that children as young as 5 already prefer natural foods, and that the rejection of cultured meat is associated with emotion-based traits such as disgust sensitivity and concerns about purity.


Moral psychology x AI 

Research in psychology typically aims to understand how we make moral judgments about, and engage with, AI systems. While this work is valuable, researchers in AI safety tend instead to focus on questions of global cooperation and value alignment. I believe there are a number of areas in which psychological research can contribute to these questions. For example, how can findings from climate change psychology inform the development and regulation of safe and aligned AI? How can our knowledge of human values inform approaches to AI alignment, including which values we prioritize? And how can our knowledge of biases in moral consideration (e.g., cognitive dissonance) help us understand barriers to granting moral consideration to future AIs?