
The Templeton Ideas Podcast is a show about the most awe-inspiring ideas in our world and the people who investigate them.


Transcripts of our episodes are made available as soon as possible. They are not fully edited for grammar or spelling.

Liane Young is a professor at Boston College, where she directs the Morality Lab, which explores concepts like virtue, social norms, identity, and belief formation. Dr. Young’s current research focuses on theory of mind and emotions in moral judgment and behavior. Her insights and findings have appeared in The New York Times, NPR, and more.


Ben: Liane, welcome to the Templeton Ideas podcast.

Liane: Thanks so much, Ben. It’s good to be here. 

Ben: I want to start with some personal questions about your intellectual journey because what you explore as a researcher is fundamental to being a human, and it’s important to understand where you are coming from as a human. So, just to start, where did you grow up, and what were you interested in as a child?

Liane: I grew up in a town called Wellesley, which is a western suburb of Boston. As a kid, I grew up figure skating competitively, playing ice hockey, and spending a lot of time at the ice rink; I was an anchor for a kids' sports show on New England Cable News called Kids Talk Sports, which, thinking back, was a pretty cool experience to have as a kid. I went to a progressive school in Boston, where in my senior year I ended up being an editor for the school paper, running pieces on topics that were controversial at the time and remain controversial now: affirmative action, abortion, gun safety, and animal rights. So that's a little bit of background on me.

Ben: It sounds like you were tangling with some big and thorny subjects from an early age. When did you first get curious about moral and psychological questions?

Liane: In high school, I was already struck by how intense moral disagreement could be and how impossible it seemed for people to change their minds about moral issues. I guess ironically, I was also on the debate team in middle school.

So maybe that fascination with persuasion and disagreement and belief updating started even earlier, but in high school, I remember reading an article in the Boston Globe about Peter Singer’s new appointment at the Center for Human Values at Princeton and some of his strong stances on biomedical ethical issues.

And I was just struck by how I didn't share many of the stances that I read about, and yet the logic that I saw seemed to be bulletproof. And so, I wondered about the extent to which we should rely on our intuitions and our emotions, on the one hand, versus principles and reasoned philosophy on the other, and how those two might interact.

Ben: Did you have specific questions in mind that led you to study psychology?

Liane: It was very much these questions about moral conflict: both internal moral conflict, when we experience moral dilemmas and truly don't know what to do or what the right answers are, and conflict across people, across groups of people. How can we resolve those kinds of internal and external conflicts? Where do these intuitions come from in the first place, and to what extent are basic moral intuitions shared?

Ben: Oftentimes, we don’t share the same intuitions, even with the people that we’re close with, so how do we sort that out?

Liane: So, I was drawn to philosophy for that reason, because it seemed like philosophy offered useful approaches to navigate those kinds of dilemmas and decisions. Then at some point, I realized that there had to be some psychological, biological basis for this kind of fundamental capacity that all people have for making judgments, whether those judgments are accurate or inaccurate. We do it all the time, spontaneously, across close relationships and in the workplace. So I was interested, again, in how people do this and how we can improve our moral decision-making.

Ben: You’ve just described a couple of different fields and perspectives to bring to the question. You’ve described philosophy and the biological roots of some of these intuitions, obviously psychology, and sociology. How did you come to bring together these different fields and perspectives? And was that an original or relatively new development when you started exploring morality from those linked, interdisciplinary perspectives?

Liane: I majored in philosophy as an undergrad, specifically to study moral intuitions in response to specific moral dilemmas. My transition to psychology was a natural one in that I wanted to study the same kinds of questions, but just from a scientific perspective. I wanted to be able to describe how people do this in the absence of clear right and wrong answers.

So, my introduction to moral philosophy featured the trolley problem, which many listeners of this podcast have probably heard of before. The most familiar version is one in which a runaway trolley is headed down the tracks, about to hit and kill five people.

The question is whether it is morally permissible to turn that trolley away from five people and onto one person instead, killing one but saving five.

What do you do? Right. So again, in many of these scenarios, the fundamental question is whether it's permissible to cause some amount of harm to maximize aggregate welfare and maximize good consequences for the greatest number of people. I was really drawn to these kinds of dilemmas, and early on, I was equally drawn to philosophers' approach to solving them by uncovering principles that made sense of conflicting intuitions across these different cases. So, taking that trolley problem that we started with, another version is one in which you are standing next to a man, and you have the option of pushing him off a bridge so that his body lands in front of the trolley, stopping it from hitting the five people further down below.

And most ordinary folks think that it is impermissible, that it is forbidden to push the man off the bridge. 

Whereas it is permissible to, say, flip a switch or pull a lever to turn that trolley away from five people and onto one person on a sidetrack. I was fascinated by how people can have these contradictory intuitions about these different cases.

And philosophers offered these normatively plausible, sensible principles for that puzzle, right? There are many solutions to this problem, but one such solution is called the doctrine of double effect, where there could be two effects to your action of turning the trolley, right?

You foresee that you will harm one person, but that is not your intention. You don't mean to harm the one person on the sidetrack. So, there are two effects of your action: harming one but also saving five. And in the case where you push a man off a bridge, you intend to use his body to your end of saving the five people. In any case, this difference between intended harm and foreseen harm I found very, very compelling, and I thought it was a reasonable way to make sense of these different psychological responses to these scenarios.

At the time, an affective revolution was going on in social psychology and neuroscience, suggesting that affect, affective responses, and emotional intuitions were really the psychological solution to this problem.

On that view, it simply felt worse to push a man off a bridge than to flip a switch or pull a lever, and that is why philosophers and ordinary people distinguish between these two different kinds of cases. I wasn't convinced.

I felt like emotions had to be part of the solution, but maybe not the entire solution. So I ended up studying psychology and working at MIT with a neuroscientist, Rebecca Saxe, who had been doing a lot of work characterizing the neural mechanisms supporting how people process information about other people's intentions. One of my very first projects as a graduate student was to look at how those brain regions that help us process information about intentions respond to these kinds of moral scenarios. Are they being recruited to discriminate between these kinds of cases in the way that I thought maybe some philosophers were doing? Or was the only difference between these kinds of cases emotions, through and through?

Ben: Fascinating. Could you step back and explain in layman’s terms what your research lab does? Can you introduce it a little bit to our audience? 

Liane: Sure, our lab is the Morality Lab. Much of what we do is study human moral decision-making. What I mean by that is how people, children and adults around the world, make moral decisions, engage in moral behavior, and evaluate others' morally relevant behavior. What we don't try to do is figure out what is the right thing to do; again, that is for philosophers to figure out. But we try to figure out how ordinary people answer this question, the range of answers they give, and the underlying psychological and neurological processes that give rise to those answers to tricky moral questions.

Ben: You just mentioned ordinary people, and I do have a question related to the trolley problem, which is: this is an extreme scenario that does a great job illustrating our moral intuitions and the variations among them. What connection do you see between those extreme moral dilemmas and the mundane moral dilemmas that people face? What kind of light do they shed?

Liane: That is a great question. As I mentioned before, I think these trolley problems highlight the role of intentions in our moral judgments, and I think we spontaneously infer and evaluate people's intentions all the time in our everyday interactions.

It's useful for us to know whether somebody sent a computer virus by accident or on purpose. It is useful in order to predict how to interact with people in the future, what they might do, whether they're somebody to be friends with or not. But again, I think we make these kinds of moral judgments all the time.

Just to give you a few examples: driving into work yesterday, I made several judgments of people I had never met, just based on the signs that they had up in their yards, and I'm sure I'm not the only one to take note of people's yard signs. I judge students who use ChatGPT to write essays when they're instructed not to.

I judge parents who send their kids to school with a fever. I judge my friends for arriving late to dinner. I judge people who make too many judgments of other people. And of course, there are all sorts of positive judgments that I, and others, make as well. So, I think what we want to do is capture the unique psychological processes that help us make these kinds of judgments, without the everyday associations that people might have with more familiar scenarios, right?

And so, something like the trolley problem was initially constructed to capture people's intuitions about biomedical ethical cases. A physician, for instance, could administer a high dose of pain medication with the intent to reduce pain in a patient, but with the foreseen effect that that dosage will cause the patient to die. The doctrine of double effect allows us to distinguish between what is intended and what is foreseen.

And that can be useful in figuring out how to interpret people's actions, in addition to making moral evaluations of those actions. But again, as you pointed out, what we as psychologists do is try to connect these abstract scenarios to ordinary scenarios that matter to us.

Another way that we've done that is to look closely at how people make moral judgments and decisions across different social relationship contexts. It's not enough to think about how people interact with or impact strangers and hypothetical others; we also need to look at how we engage and interact with people that we're close to. It turns out that we make different sorts of judgments depending on the relationship context.

Ben: Stepping back to the big picture here, we’re used to thinking of morality in those philosophical contexts where we’re talking about right and wrong, capital R, capital W, good and evil, and thinking through these more abstract scenarios and trying to determine what is objectively right in a universal sense.

It sounds like there is a different way of looking at it from a psychological point of view, where it's more contextualized in the humans and how we make these decisions. So, I'm curious, how would you define morality as somebody researching it from a more psychological point of view?

Liane: I think that is a challenging question in that, in some sense, morality is a little bit of a moving target, as people update their views about what counts as morally relevant in the first place. There could also be differences across groups, and across individuals, about what counts as moral or not, in addition, of course, to disagreement about what is better or worse within the domain of morality. More broadly, again, what we're trying to discover as psychologists is how to describe the ways in which people figure out what is right and wrong: both from a third-person perspective, how people evaluate the actions of other people as well as themselves, and from a first-person perspective, how people engage in moral behavior. How do we decide what to do, in addition to evaluating those decisions and those actions?

Ben: When we're making these evaluations, you've said that intentions seem to matter a lot, that we care very much whether somebody meant to do harm or not. Why do intentions seem to matter so much when it comes to moral judgments?

Liane: People think of intentions as a guide to internal character, stable dispositional traits that help to predict future behavior. And this is ingrained in our culture, in our legal system, in religious principles, right? We have these common sayings: it's the thought that counts; it's what's in your heart that matters. That's not to say there's no individual variability, which we have certainly found and characterized, or variability across cultures, which research has also discovered. But I think, by and large, intentions are central to how we understand people, how we not just evaluate people but predict their future behavior, which might be equally important in helping us to navigate our future.

Ben: Have you ever explored where the intentions paradigm might break down? And here I have something in mind, which is: good intentions do matter a lot, and somebody who has good intentions for everything they do can be given a lot of grace when it comes to moral judgments. But it's not infinite; good intentions only get you so far. Behaving wrongly, even if you have a good heart, has a certain limit in people's point of view. I'm curious, have you ever looked at that, and do you have a sense of where intentions break down in terms of giving people enough to go on to give somebody a moral pass, or to understand their behavior in that context?

Liane: I think we all have the intuition that we give people that we’re close with the benefit of the doubt. So, if someone that you know well does something that seems a little bit shady, you might come up with some alternative explanation to explain away some bad behavior. 

We have some studies looking at how people update their impressions of close versus distant others, trying to answer this question of when it is that we give people the benefit of the doubt versus updating our impressions in the face of new and maybe unpleasant, unwelcome evidence. And again, it turns out that we do give people the benefit of the doubt, in part by using our theory of mind, our capacities to think about intentions, to think about what else they could have meant to do, but, as you mentioned, only to a certain extent, right?

And so, in all cases, I think moral judgment is complex. It’s multifactorial. It’s never going to be all one or all the other, all intentions or all outcomes, not to mention several other factors that shape our moral judgments and our moral decisions.

Ben: What are some other strong factors or surprising factors that seem to affect our judgments, kind of like how gravity affects the path of light through space? What are those gravitational objects that change our moral compass and cause it to go in a different direction?

Liane: In addition to factors that describe the event or the action itself, like an agent's intentions or an action's outcomes, I think internal moral values are a big source of what shapes our moral judgments and our moral decisions. What is tricky about the impact of moral values is that they often conflict, and again, we see that kind of conflict playing out both in internal moral dilemmas and, importantly, in conflict across people and across groups.

And so, what strikes me is that even in today's discourse around issues like free speech and affirmative action, people share many of the same common core values that have to do with justice and fairness and protection of life and protection of rights and freedom. And yet, despite those shared moral values, we still end up with quite a lot of conflict.

So, again, I think our values and our principles do a lot of the work in shaping our decisions and what we think is right and wrong, but they don't necessarily lead to solutions, or resolutions, in the case of conflict.

As I mentioned before, I also think that relationship context matters quite a lot in how we make moral decisions. We, again, often give people that we know, people that we're close with, the benefit of the doubt. Sometimes that's for good reason, because we have some evidence, some context, some narrative for understanding, interpreting, and evaluating somebody's behavior; and sometimes we do so in a biased, motivated fashion, either because we want to preserve the relationship or because our relationships also reflect on our own moral identities.

Identity is also a big factor in how we make our moral judgments, which could include social identity, ideological beliefs, and other kinds of commitments that guide our judgments. So many, many different factors.

We're running a new set of studies right now, and we have a paper coming out, on how people judge those who try hard, overcome temptation, and ultimately do the right thing, on the one hand, and on the other hand, how people judge those who are spontaneously good, people who find it very easy to do the right thing. And you can imagine philosophers coming down on both sides of this debate, right?

We have virtue ethicists and Aristotelians on one side and Kantians on the other, emphasizing the role of duty and obligation in shaping our moral decisions. So there's variability in how we regard people who really put in a lot of moral effort, and maybe deserve corresponding moral credit, versus those for whom it's just easy and natural and automatic. It turns out, by and large, that we don't give people too much credit for overcoming temptation. We would much prefer people who find it easy to do the right thing. And that is especially true in the case of close social relationships. We don't want spouses, for instance, who want credit for unloading the dishwasher, right?

We don't want spouses or partners who find it really difficult to stay faithful. We want that to come easily and automatically. On the other hand, we might give colleagues some moral praise or credit if we know, and this is an example from my own life, that they're trying very hard to be vegetarian when they are very tempted to eat meat.

So, I think there are many interesting cases where there is variability, but that's all to say that relationship context really matters in how we make these kinds of moral judgments.

Ben: Mm-hmm. Well, learning all this about how complicated it is, you could easily imagine someone saying, you know, human beings just aren't good at morality. We try really hard, we have the best intentions, but we really don't do it very consistently or fairly. So why don't we hold off from making moral judgments at all? Do you think that would be a fair conclusion? I mean, at some point, could you imagine AI stepping in and giving us these judgments in a more systematic fashion?

Liane: First, I think it is impossible to refrain from making moral judgments because we do it all the time and we can’t help ourselves. So, I don’t know if anyone would be up for that challenge. I think we can certainly strive to make better moral judgments and to note when we are being inconsistent or hypocritical. And I think often it’s easy to note those kinds of inconsistencies in other people and a little bit harder to reflect on our own actions and our own judgments. 

Going back to our discussion about context, I think it's okay to acknowledge that context matters, that relationships matter, and that some moral values should be more salient and relevant in some contexts and others in other contexts.

An example that comes to mind is that we might ask those who are in power to exercise the virtue of humility, but we might ask those who are not in positions of power, or those with minority viewpoints, to exercise courage in speaking out. That's just one example of where I think it's clear that different virtues and different moral principles might matter differently depending on the person or the situation.

So, I think simply giving up on all moral judgments, or all efforts to organize those moral judgments, makes less sense than trying to organize those kinds of contexts in ways that we, upon reflection, would endorse.

Ben: Well, Liane, I want to turn to some final questions. I want to ask you how you've applied some of the insights that have come out of your research, and all that you've learned in becoming a scholar of moral psychology. How did you apply that to your personal life?

Liane: That's a great question. I think where we started this conversation was with my observation in middle school and high school that it seems to be incredibly hard to change people's moral minds, particularly their beliefs about the world and the issues that they care deeply about. I think one thing that I've tried to be open to is changing my own mind about moral issues.

I'm more comfortable now with the idea that what I believe now might not be what I believe in 10 years, and that is something I'm okay with. I will be curious to know how I change over the next decade of my life, whether that is in response to the ideas and experiences of the people around me, or because of what I read or research.

I guess I'll give you one maybe more concrete example. Right now, a big project in the lab focuses on virtue signaling and, correspondingly, virtue discounting: how and when it is that people interpret public signals of virtue as reputationally motivated as opposed to genuine and authentic.

I started thinking about these ideas at some point during the pandemic, when my daughter asked me whether we could put a sign up in our yard supporting a particular cause. And I hesitated. I was reluctant because I was concerned about how that public signal would be perceived. And I wondered how many others feel the same way, and how much easier it would be to spread pro-social causes if we weren't reluctant to do so, if we weren't worried about what others think. And I know that so much of evolutionary and social psychology is about people doing good to look good.

But this is something different. This is about worrying that people think you're doing good just to look good, even when you genuinely believe in a particular cause. So I've been really interested in how we both combat skepticism about public acts of virtue and combat people's reluctance to spread the causes that they support.

I think that is something that I'm continuing to try to do in my own life, particularly in modeling it for my young children.

Ben: This is the last question from me today, Liane. What gives you hope? 

Liane: One of the reasons it is hard to answer this question is that there's hope on different fronts. I hope that we can begin to recover some of those fundamental values and principles that I'm sure people on every side of social and political boundaries can agree on, can rally around.

I think the challenge is figuring out how to identify those values and how to excavate them in a way that feels meaningful. I am hopeful about cultivating those common values in the next generation, promoting a culture that doesn't feel so much like us versus them, where folks must choose one side or the other, where folks, myself included, are trying to figure out the source of a statement in order to interpret or evaluate it. I wish that didn't matter so much, and I'm hopeful that in a future generation, it won't matter so much.

Ben: Thank you so much, Liane.