Sep 26, 2017

Driverless Dilemma

Most of us would sacrifice one person to save five. It’s a pretty straightforward bit of moral math. But if we have to actually kill that person ourselves, the math gets fuzzy.

That’s the lesson of the classic Trolley Problem, a moral puzzle that fried our brains in an episode we did about 11 years ago. Luckily, the Trolley Problem has always been little more than a thought experiment, mostly confined to conversations at a certain kind of cocktail party. That is, until now. New technologies are forcing that moral quandary out of our philosophy departments and onto our streets. So today we revisit the Trolley Problem and wonder how a two-ton hunk of speeding metal will make moral calculations about life and death that we can’t even figure out ourselves.

This story was reported and produced by Amanda Aronczyk and Bethel Habte.

Thanks to Iyad Rahwan, Edmond Awad and Sydney Levine from the Moral Machine group at MIT. Also thanks to Fiery Cushman, Matthew DeBord, Sertac Karaman, Martine Powers, Xin Xiang, and Roborace for all of their help. Thanks to the CUNY Graduate School of Journalism students who collected the vox: Chelsea Donohue, Ivan Flores, David Gentile, Maite Hernandez, Claudia Irizarry-Aponte, Comice Johnson, Richard Loria, Nivian Malik, Avery Miles, Alexandra Semenova, Kalah Siegel, Mark Suleymanov, Andee Tagle, Shaydanay Urbani, Isvett Verde and Reece Williams.

Support Radiolab today at Radiolab.org/donate.

 


Speaker 1:
Oh, wait, you're listen (laughs).

Speaker 2:
Okay.

Speaker 1:
All right.

Speaker 2:
Okay.

Speaker 1:
All right.

Speaker 2:
You're listen ...

Speaker 1:
Listening ...

Speaker 2:
To Radiolab. From ...

Speaker 1:
WNYC.

Speaker 2:
C?

Speaker 1:
Yeah.

Jad Abumrad:
I'm Jad Abumrad.

Robert Krulwich:
I'm Robert Krulwich, and you know what this is.

Jad Abumrad:
It's Radiolab (laughs).

Robert Krulwich:
Yeah.

Jad Abumrad:
Okay, so we're going to play you a little bit of tape first, just to set up the ... what we're going to do today. About a month ago, we were doing the thing about the fake news.

Robert Krulwich:
Yeah, we were really worried about a lot of fake news. A lot of people are. But in the middle of doing that report, and we, we were talking with a fellow from Vanity Fair.

Nick Bilton:
My name is Nick Bilton. I'm a special correspondent for Vanity Fair. [crosstalk 00:00:41]

Robert Krulwich:
And in the course of our conversation, Nick ... And this had nothing to do with what we were talking about, by the way ... Nick just got into a sort of a, well he went into a kind of nervous reverie, I'd say.

Jad Abumrad:
Yeah, he was like, "You know, you guys want to talk about fake news, but that's not actually what's eating at me."

Nick Bilton:
The thing that I've been pretty obsessed with lately is actually not fake news, but it's automation and artificial intelligence and, um, and driverless cars. Because it's going to have a larger effect on society than any technology that I think has ever been created in the history of mankind. I know that's kind of a bold statement, but-

Jad Abumrad:
(laughing)

Jad Abumrad:
Quite bold.

Nick Bilton:
Uh, but you got to imagine that, you know, that there will be in the next 10 years, 20 to 50 million jobs that will just vanish to automation. Um, you've got, you know, a million truckers that will lose their jobs, um, uh, the ... But it's not ... We think about, like, automation and driverless cars and we think about the fact that, uh, they are going to, uh ... The people that just drive the cars, like the taxi drivers and the truckers, are going to lose their jobs.

Nick Bilton:
What we don't realize is that there are entire industries that are built around just cars. So, for example, if you are not driving the car, why do you need insurance? There are no parking tickets because your driverless car knows where it can and cannot park and goes and finds a spot and moves and so on.

Nick Bilton:
If there are truckers that are no longer using rest stops because driverless cars don't have to stop and pee or take a nap, then all of those little rest stops all across America are affected. People aren't stopping to use the restrooms. They're not buying burgers. They're not staying in these hotels, and so on and so forth.

Nick Bilton:
And, and then, if you take driverless cars to the next level, the whole concept of what a car is is going to change. And so, for example, right now, a car has five seats and a wheel, but if I'm not driving, what's the point of having five seats and a wheel? You could imagine that you take different cars, so maybe when I was on my way here to this interview, I wanted to work out, so I called a driverless gym car. Or I have a meeting out in Santa Monica after this, and it's an hour, so I call a movie car to watch a movie on the way out there. Or office car and I pick up someone else and we have a meeting on the way.

Nick Bilton:
All of these things are going to happen, not in a vacuum, but simultaneously. This, you know ... Pizza delivery drivers are gonna be replaced by robots that will actually cook your pizza on the way to your house in a little box and then deliver it.

Nick Bilton:
And so, kind of a little bit of a long-winded answer, but I- I truly do think that, um, that it's gonna have a massive, massive effect on society.

Nick Bilton:
Am, am I stressing you guys out? Are you, are you having heart palpitations over there?

Jad Abumrad:
Yeah, yeah, my blood pressure has gone up.

Robert Krulwich:
This is not good. This is not good.

Robert Krulwich:
So that's a fairly compelling description of a, of a, um, of a very dangerous future.

Jad Abumrad:
Yes, but you know what? It's funny. Uh, one of the things that ... I mean, we couldn't use that tape, initially at least.

Robert Krulwich:
Right.

Jad Abumrad:
But we kept thinking about it because it actually weirdly points us back to a story we did about a decade ago. The story of a moral problem that's about to get totally reimagined.

Robert Krulwich:
It may be that what Nick is worried about and what we were worried about 10 years ago have now come dangerously close together.

Jad Abumrad:
So what we thought we would do is we're, we're going to play you the story as we did it then, sort of the full segment. And then we're going to amend it on the back end.

Jad Abumrad:
And by way of just disclaiming, this was at a moment in our development where there's just, like, way too many sound effects, just gratuitous.

Robert Krulwich:
Well, you don't have to apologize for it. Those were great sound effects.

Jad Abumrad:
No, I'm, I'm gonna apologize 'cause there's just too much.

Robert Krulwich:
(laughs)

Jad Abumrad:
Just too much. And also, like, we, we, we talk about the fMRI machine like it's this, like, amazing thing (laughs) when it was, it's sorta commonplace now.

Jad Abumrad:
Anyhow, it doesn't matter. We're going to play it for you and then talk about it on the back end. This is ...

Jad Abumrad:
We start with a description of something called "the trolley problem." You ready?

Robert Krulwich:
Yep.

Jad Abumrad:
All right. You're gonna hear some train tracks. Go there in your mind.

Robert Krulwich:
Okay.

Jad Abumrad:
There are five workers on the tracks, working. They've got their backs turned to the trolley, which is coming in the distance.

Robert Krulwich:
You mean they're repairing the tracks?

Jad Abumrad:
They are repairing the tracks.

Robert Krulwich:
This is unbeknownst to them, the trolley is approaching?

Jad Abumrad:
They don't see it. You can't shout to them.

Robert Krulwich:
Okay.

Jad Abumrad:
And if you do nothing, here's what will happen. Five workers will die.

Robert Krulwich:
Oh my god (laughs). I ... That was a horrible experience. I don't want that to happen to them.

Jad Abumrad:
No, you don't. But you have a choice. You can do A, nothing. Or B, it so happens next to you is a lever. Pull the lever and the trolley will jump onto some side tracks where there is only one person working.

Robert Krulwich:
So if the (laughing) ... So if the trolley goes on the second track, it will kill the one guy.

Jad Abumrad:
Yeah, so there's your choice. Do you kill one man by pulling a lever or do you kill five men by doing nothing?

Robert Krulwich:
Well, I'm gonna pull the lever.

Jad Abumrad:
Naturally. All right, here's part two.

Jad Abumrad:
You're standing near some train tracks. Five guys are on the tracks, just as before, and there is the trolley coming.

Robert Krulwich:
I hear the train coming in the ... The five same, same five guys working on the track?

Jad Abumrad:
Same five guys.

Robert Krulwich:
Backs to the train, they can't see anything?

Jad Abumrad:
Yeah, yeah, exactly.

Jad Abumrad:
However, I'm gonna make a couple changes. Now you're standing on a footbridge that passes over the tracks. You're looking down onto the tracks. There's no lever anywhere to be seen, except, next to you, there is a guy.

Robert Krulwich:
What do you mean, "there's a guy?"

Jad Abumrad:
A large guy, large individual, standing next to you on the bridge, looking down with you over the tracks, and you realize, "Wait, I can save those five workers if I push this man, give him a little tap."

Jad Abumrad:
(laughing)

Jad Abumrad:
He'll land on the tracks and stop the tr-

Robert Krulwich:
And he stops the train.

Jad Abumrad:
Right.

Robert Krulwich:
Oh, yeah, I'm not gonna do that. I'm not gonna do that.

Jad Abumrad:
But surely you realize the math is the same.

Robert Krulwich:
You mean I'll save four people this way?

Jad Abumrad:
Yeah.

Robert Krulwich:
Yeah, but I'm ... This time I'm pushing the guy. Are you insane? No.

Jad Abumrad:
All right, here's the thing. If you ask people these questions, and we did, starting with the first.

Jad Abumrad:
"Is it okay to kill one man to save five using a lever?" 9 out of 10 people will say ...

Speaker 6:
Yes.

Speaker 7:
Yes (laughs).

Speaker 8:
Yes.

Speaker 9:
Yes.

Speaker 10:
Yeah.

Jad Abumrad:
But if you ask them, "Is it okay to kill one man to save five by pushing the guy?" 9 out of 10 people will say ...

Speaker 6:
No.

Speaker 7:
No.

Speaker 8:
Never.

Speaker 9:
No.

Speaker 10:
No.

Jad Abumrad:
It is practically universal. And the thing is, if you ask people, "Why is it okay to murder, because that's what it is, murder, a man with a lever and not okay to do it with your hands?" people don't really know.

Speaker 6:
Pulling the lever to save the five ... I don't know. That feels better than pushing the one to save the five. But I don't really know why, so that's a good ... There's a good moral quandary for you (laughs).

Robert Krulwich:
And if having a moral sense is a unique and special human quality then maybe we, we, us two humans anyway, you and me-

Jad Abumrad:
Yeah.

Robert Krulwich:
... should at least inquire as to why this happens. And I happen to have met somebody who has a hunch.

Robert Krulwich:
He's a young guy at Princeton University, wild curly hair, bit of mischief in his eye. His name is Josh Greene.

Josh Greene:
Alrighty.

Robert Krulwich:
And he spent the last few years trying to figure out where this inconsistency comes from.

Josh Greene:
How do people make this judgment? Forget whether or not these judgements are right or wrong. Just, what's going on in the brain that makes people distinguish so naturally and intuitively between these two cases, which from an actuarial point of view, are very, very, very similar if not identical?

Robert Krulwich:
Josh is, by the way, a philosopher and a neuroscientist, so this gives him special powers. He doesn't sort of sit back in a chair, smoke a pipe, and think, "Now why do you have these differences?"

Robert Krulwich:
He says, "No, I would like to look inside people's heads because in our heads we may find clues as to where these feelings of revulsion or acceptance come from." In our brains.

Josh Greene:
Uh, so we're here in the control room. We basically just see, is, uh, [crosstalk 00:08:30]

Robert Krulwich:
And, just so happens that in the basement of Princeton, there was this, um, well ...

Robert Krulwich:
A big circular thing.

Josh Greene:
Yeah, it looks kind of like an airplane engine.

Robert Krulwich:
180,000-pound brain scanner.

Josh Greene:
I'll tell you a funny story. You can't have any metal in there because of the magnet, so we have this long list of questions that we ask people to make sure they can go in it. "Do you have a pacemaker? Have you ever worked with metal?" Blah, blah, blah, blah, blah...

Robert Krulwich:
"Have you ever worked with metal?"

Josh Greene:
Yeah, 'cause you could have little flecks of metal in your eyes that you would never even know are there from having done metalworking.

Speaker 12:
Oh my god, totally [crosstalk 00:09:02]

Josh Greene:
And one of the questions is whether or not you wear a wig or anything like that because they often have metal wires in with that.

Josh Greene:
And there's this very nice woman who does brain research here, who's Italian, and she's asking her subjects over the phone all, all these screening questions.

Josh Greene:
And so I have this person over to dinner. She says, "Yeah, you know, I ended up, uh, doing this study, but it asks you the weirdest questions. This woman's like, 'Do you have a hairpiece?' And (laughing), and I'm like, 'What does it have to do if I have herpes or not?'"

Josh Greene:
(laughing)

Josh Greene:
And I want to say [inaudible 00:09:27] ... Anyway, and she said, you know, she asked, "Do you have a hairpiece?" But she, uh, so now she asks people if you wear a wig or whatever. [crosstalk 00:09:35]

Robert Krulwich:
Anyhow, what Josh does is he invites people into this room, has them lie down on what is essentially a cot on rollers, and he rolls them into the machine. Their heads are braced. They're sort of stuck in there.

Robert Krulwich:
Have you ever done this?

Josh Greene:
Oh yeah, yep, several times.

Robert Krulwich:
And then he tells them stories. He tells them the same two, you know, trolley tales that you told before.

Jad Abumrad:
Mm-hmm (affirmative).

Robert Krulwich:
And then at the very instant that they're deciding whether I should pull the lever, or whether I should push the man, at that instant, the scanner snaps pictures of their brains.

Robert Krulwich:
And what he found in those pictures was, frankly, a little startling. Uh, he showed us some.

Josh Greene:
I- I, al- all right, I'll show you some, some, some stuff.

Josh Greene:
Okay, let me think.

Robert Krulwich:
Th- The picture that I'm looking at is a sort of a, it's, it's a brain looked at, I guess, from the top-down.

Josh Greene:
Yep, it's top-down. It's sort of sliced, you know, like, like, like a deli slice or ...

Robert Krulwich:
And the first slide that he showed me was a human brain being asked the question, "Would you pull the lever?" And the answer in most cases was, "Yes."

Speaker 13:
Yeah, I'd pull the lever.

Robert Krulwich:
When the brain's saying, "Yes," you'd see little kind of peanut-shaped spots of yellow.

Josh Greene:
[crosstalk 00:10:38] ... this little guy right here and these two guys right there. [crosstalk 00:10:41]

Robert Krulwich:
The brain was active in these places, and oddly enough, whenever people said, "Yes" ...

Speaker 14:
Yes, yes.

Robert Krulwich:
... to the lever question, the very same pattern lit up.

Robert Krulwich:
Then he showed me another slide. This is a slide of a brain saying, "No."

Speaker 15:
No, I would not push the man.

Robert Krulwich:
"I will not push the large man." And in this picture ...

Josh Greene:
This one we're looking at here, this ...

Robert Krulwich:
It was a totally different constellation of regions that lit up.

Robert Krulwich:
This is the "No, no, no" crowd.

Josh Greene:
I think this is part of the "No, no, no" crowd.

Jad Abumrad:
So, when people answer, "Yes" to the lever question, there are, there are places in their brain which glow?

Robert Krulwich:
Right, but wh- when they answer, "No, I will not push the man," then you get a completely different part of the brain lighting up.

Jad Abumrad:
Even though the questions are basically the same?

Robert Krulwich:
Mm-hmm (affirmative).

Jad Abumrad:
What does that mean? What does Josh make of this?

Robert Krulwich:
Well he has a theory about this.

Josh Greene:
A theory, not proven, but I think ... Th- th- this is what I think the evidence suggests.

Robert Krulwich:
He suggests that the human brain doesn't hum along like one big, unified system. Instead he says, maybe in your brain, every brain, you find little, uh, warring tribes, little subgroups. One, that is sort of doing the logical sort of counting kind of thing.

Josh Greene:
You've got one part of the brain that says, "Huh, five lives versus one life? Wouldn't it be better to save five versus one?"

Robert Krulwich:
And that's the part that would glow when you answer, "Yes, I'd pull the lever."

Speaker 16:
Yeah, I'd pull the lever.

Robert Krulwich:
But, there's this other part of the brain, which really, really doesn't like personally killing another human being and gets very upset at the fat man case, and shouts, in effect ...

Speaker 17:
No!

Speaker 18:
No!

Josh Greene:
It, it understands it on that level, and says ...

Speaker 17:
No!

Speaker 18:
No!

Josh Greene:
No, bad, don't do.

Speaker 19:
No, I don't think I could push ...

Speaker 20:
No.

Speaker 21:
Never.

Speaker 19:
... a person.

Speaker 22:
No.

Josh Greene:
Instead of having sort of one system that just sort of churns out the answer and bing, we have multiple systems that give different answers, and they duke it out. And hopefully out of that competition comes morality.

Robert Krulwich:
This is not a trivial discovery, that you struggle to find right and wrong depending on what part of your brain is shouting the loudest. This is ... It's like bleachers morality.

Jad Abumrad:
Do you buy this?

Robert Krulwich:
Hm. Uh, you know, I- I just don't know.

Jad Abumrad:
Yeah.

Robert Krulwich:
I've always kinda suspected that a sense of right and wrong is mostly stuff that you get from your mom and your dad and from experience, that it's culturally-learned for the most part.

Robert Krulwich:
Josh is kind of a radical in this respect. He thinks it's biological. I mean, deeply biological, that somehow we inherit from the deep past a sense of right and wrong that's already in our brains from the get-go, before Mom and Dad.

Josh Greene:
Our, our primate ancestors, before we were full-blown humans, had intensely social lives. They have social mechanisms that prevent them from doing all the nasty things that they might otherwise be interested in doing.

Josh Greene:
And so deep in our brain, we have what you might call basic primate morality. And basic primate morality doesn't understand things like tax evasion, but it does understand things like pushing your buddy off of a cliff.

Robert Krulwich:
Oh, so you're thinking then, if the man on the bridge, that I'm on the bridge next to the large man, and then I have hundreds of thousands of years of training in my brain that says, "Don't murder the large man."

Josh Greene:
Right. Whereas-

Robert Krulwich:
Even if I'm thinking, "If I murder the large man, I'm gonna save five lives and only kill the one man," but there's something deeper down that says, "Don't murder the large man."

Josh Greene:
Right.

Josh Greene:
Now that case, I think it's a pretty easy case. Even though it's five versus one, in that case, people just go with what we might call the "inner chimp." But there are other, but there-

Robert Krulwich:
The "inner chimp" is your unfortunate way of describing an act of, an act of deep goodness.

Josh Greene:
R- R- Right, well that's what's interesting.

Robert Krulwich:
"Thou shalt ..." Let's have 10 Commandments for God ... "Inner chimp ..."

Josh Greene:
Right, well ... Right, well, what's interesting is that we think of, of basic human morality as being handed down from on high, and it's probably better to say that it was handed up from below, that our most basic core moral values are not the things that we humans have invented, but the things that we've actually inherited from our primate ancestors. The stuff that we humans have invented are the things that seem more peripheral and variable. But-

Robert Krulwich:
Something as basic as, "Thou shalt not kill," which many people think was handed down in tablet form from a mountaintop from God directly to humans, no chimps involved ...

Josh Greene:
Right.

Robert Krulwich:
You're suggesting that hundreds of thousands of years of on-the-ground training have gotten our brains to think, "Don't kill your kin. Don't kill your-"

Josh Greene:
Right, or at least, you know, that should be your default response. I mean, certainly chimpanzees are extremely violent, and they do kill each other, but they don't do it as a matter of, of course. They, so to speak, have to have some context-sensitive reason for doing so. Uh-

Robert Krulwich:
So now we're getting to the rub of it. You think that profound moral positions may be somehow embedded in brain chemistry.

Josh Greene:
Um, yeah.

Jad Abumrad:
And Josh thinks, uh, there are times when these different moral positions that we have embedded inside of us, in our brains, when they can come into conflict. And in the original episode, we went into one more story. This one, you might call the "crying baby dilemma."

Jad Abumrad:
Th- The situation, uh, is somewhat similar to th- the last episode of M*A*S*H for people who are familiar with that. But the way we tell the story, it goes like this.

Jad Abumrad:
It's wartime...

Speaker 23:
There's an enemy patrol coming down the road.

Jad Abumrad:
You're hiding in the basement with some of your fellow villagers.

Speaker 23:
Let's kill those lights.

Jad Abumrad:
And the enemy soldiers are outside. They have orders to kill anyone that they find.

Speaker 23:
Quiet! Nobody make a sound until they've passed us.

Robert Krulwich:
So there you are. You're huddled in the basement, all around you are enemy troops, and you're holding your baby in your arms, your baby with a cold, a bit of a sniffle. And you know that your baby could cough at any moment.

Jad Abumrad:
If they hear your baby, they're going to find you and the baby and everyone else, and they're going to kill everybody. And the only way you can stop this from happening is to cover the baby's mouth.

Jad Abumrad:
But if you do that, the baby's going to smother and die. If you don't cover the baby's mouth, the soldiers are gonna find everybody and everybody's gonna be killed, including you, including your baby.

Robert Krulwich:
And you have the choice. Would you smother your own baby to save the village, or would you let your baby cough, knowing the consequences?

Jad Abumrad:
And this is a very tough question. People take a long time to think about it, and some people say yes, and some people say no.

Speaker 24:
Children are a blessing and a gift from God, and we do not do that to children.

Speaker 25:
Yes, I think I would kill my baby to save everyone else and myself.

Speaker 26:
No, I would not kill the baby.

Speaker 27:
I feel because it's my baby, I have the right to terminate the life. Um ... so, yeah.

Speaker 28:
I'd like to say that I would kill the baby, but I don't know if I'd have the inner strength.

Speaker 29:
No. If it comes down to killing my own child, my own daughter or my own son, then I choose death.

Speaker 30:
Yeah. If you have to, 'cause it was done in World War II. When the Germans were coming around, there was a mother that had a baby that was crying, and rather than be found, she actually suffocated the baby, but the other people lived.

Speaker 31:
Sounds like an old M*A*S*H thing. No, you do not kill your baby.

Robert Krulwich:
In the final M*A*S*H episode, the Korean woman who's a character in this piece, she murders her baby.

Speaker 32:
She killed it. She killed it. Oh my god, oh my god. I didn't mean for her to kill it. I did not ... I- I just wanted it to be quiet. It was, it was a baby. She, she smothered her own baby.

Robert Krulwich:
What Josh did is he asked people the question, "Would you murder your own child?" while they were in the brain scanner. And at just the moment when they were trying to decide what they would do, he took pictures of their brain.

Robert Krulwich:
And what he saw, the contest we described before, was global in the brain. It was like a world war.

Robert Krulwich:
That gang of accountants, that part of the brain was busy calculating, calculating. The whole village could die, the whole village could die.

Robert Krulwich:
But the older and deeper reflex also was lit up, shouting, "Don't kill the baby! No, no! Don't kill the baby!"

Speaker 17:
No!

Speaker 18:
No!

Robert Krulwich:
Inside, the brain was literally divided. Do the calculations, don't kill the baby. Do the calculations, don't kill the baby.

Robert Krulwich:
Two different tribes in the brain literally trying to shout each other out.

Robert Krulwich:
And, Jad, this was a different kind of contest than the ones we talked about before. Remember before, when people were pushing the man off the bridge, overwhelmingly their brains yelled, "No, no, don't push the man!"

Robert Krulwich:
And when people were pulling the lever, overwhelmingly, "Yeah, yeah, pull the lever!"

Jad Abumrad:
Right.

Robert Krulwich:
That was distinct. Here, I don't think really anybody wins.

Jad Abumrad:
Well, who breaks the tie? I mean, they had to answer something, right?

Robert Krulwich:
(laughs) Well, that's a good question.

Robert Krulwich:
And now, is there a... Do... What happens? Is it just two cries that fight each other out or is there a judge?

Josh Greene:
Well, that's an interesting question. And that's one of the things that we're looking at.

Robert Krulwich:
When you are in this moment, with parts of your brain contesting, there are two brain regions ...

Josh Greene:
These two areas here, towards the front ...

Robert Krulwich:
... right behind your eyebrows, left and right, that light up. And this is particular to us. He showed me a slide.

Josh Greene:
Uh, it's those, uh, sort of areas that are very highly developed in humans as compared to other species.

Robert Krulwich:
So, when we have a problem that we need to deliberate over, the light ... The front of the brain, this is above my eyebrow, sort of?

Josh Greene:
Yeah, right about there.

Robert Krulwich:
And there's two of them, one on the left, one on the right.

Josh Greene:
Bilateral.

Robert Krulwich:
And they are the things that monkeys don't have as much of that we have?

Josh Greene:
Certainly these parts of the brain are more highly developed in humans.

Robert Krulwich:
So looking at these two flashes of light at the front of a human brain, you could say we are looking at what makes us special.

Josh Greene:
Th- That's a fair statement.

Robert Krulwich:
A human being wrestling with a problem, that's what that is.

Josh Greene:
Yeah, where it's both emotional, but there's also a sort of a rational attempt to sort of sort through those emotions, those are the cases that are showing more activity in that area.

Jad Abumrad:
So in those cases when these dots above our eyebrows become active, what are they doing?

Robert Krulwich:
Well, he doesn't know for sure, but what he found is in these close contests, whenever those nodes are very, very active, it appears that the calculating section of the brain gets a bit of a boost, and the visceral "inner chimp" section of the brain is kind of muffled.

Speaker 18:
No! No. No ...

Robert Krulwich:
The people who chose to kill their children, who made what is essentially a logical decision, over and over, those subjects had brighter glows in these two areas and longer glows in these two areas, so there is a definite association between these two dots above the eyebrow and the power of the logical brain over the "inner chimp" or the visceral brain.

Josh Greene:
Well, you know, that's the hypothesis. Well, it's gonna take a lot more research to sort of tease apart what these different parts of the brain are doing or if some of these are just sort of activated in an incidental kind of way. I mean, we really don't know. This is all, all very new.

Jad Abumrad:
Okay, so that was the story we put together many, many, many years ago, about a decade ago. Uh, and at that point, the whole idea of thinking of morality as kind of, purely a brain thing, it was relatively new. And, certainly, the idea of philosophers working with fMRI machines, it was super new.

Jad Abumrad:
But now, here we are, 10 years later, and, uh, some updates. Uh, first of all, Josh Greene ...

Robert Krulwich:
So in the, in the long, long stream of time, I assume now, you have, uh, three giraffes, two bobcats, and children?

Josh Greene:
That's right. Yeah, so two kids and we- we're close to adding a cat. [crosstalk 00:22:54]

Jad Abumrad:
We talked to him again. He has started a family. He's switched labs from Princeton to Harvard.

Jad Abumrad:
But that whole time, that interim decade, he has still been thinking and working on the trolley problem.

Robert Krulwich:
Did you ever write the story differently?

Josh Greene:
Absolutely, so [crosstalk 00:23:08]

Jad Abumrad:
For years, he's been trying out different permutations of the scenario on people.

Josh Greene:
... by fire, push one of your colleagues in ...

Jad Abumrad:
Like, "Okay, instead of pushing the guy off the bridge with your hands, what if you did it, but not with your hands?"

Josh Greene:
So in one version, we ask people about hitting a switch that opens a trapdoor on the footbridge and drops the person. In one version of that, the switch is right next to the person. In another, the switch is far away. And in yet another version, you're right next to the person and you don't push them off with your hands, but you push them with a pole.

Robert Krulwich:
Oh ...

Josh Greene:
[crosstalk 00:23:38]

Jad Abumrad:
And to cut to the chase, uh, what Josh has found is that the basic results that we talked about ...

Josh Greene:
That's roughly held up. [crosstalk 00:23:46]

Jad Abumrad:
Still the case that people would like to save the greatest number of lives, but not if it means pushing somebody with their own hands or with a pole, for that matter.

Jad Abumrad:
Now here's something kind of interesting. Uh, he and [Alyssa 00:23:57] found that there are two groups that are more willing to push the guy off the bridge. They are Buddhist monks and psychopaths.

Josh Greene:
I mean, some people just don't care very much about hurting other people. They don't have that kind of an emotional response. [crosstalk 00:24:12]

Jad Abumrad:
That would be the psychopaths, whereas the Buddhist monks presumably are really good at shushing their "inner chimp," as he called it, and just saying to themselves ...

Josh Greene:
You know, I'm aware that this is ... that killing somebody is a terrible thing to do, and I feel that, but I recognize that this is done for a noble reason, and therefore, it's, it's, it's okay.

Jad Abumrad:
So there's all kinds of interesting things you can say about the trolley problem as a thought experiment, but at the end of the day, it's just that. It's a thought experiment. What got us interested in revisiting it is that it seems like the thought experiment is about to get real.

Jad Abumrad:
That's coming up right after the break.

Amanda Darby:
This is Amanda Darby, calling from Rockville, Maryland. Radiolab is supported in part by the Alfred P. Sloan Foundation, enhancing public understanding of science and technology in the modern world. More information about Sloan at www.sloan.org.

Ilya Marritz:
Hello, it's Ilya Marritz, co-host of Trump, Inc. Donald Trump is the only recent president to not release his tax returns, the only president you can pay directly by booking a room at his hotel. He shreds rules, sometimes literally.

Speaker 36:
He didn't care what reckless was. He tore up memos or things and just threw them in the trash. So it took somebody from the White House staff to tell him like, "Look, you can't do that."

Ilya Marritz:
Trump, Inc., an open investigation into the business of Trump from ProPublica and WNYC. Subscribe wherever you get your podcasts.

Jad Abumrad:
Jad, Robert, Radiolab. Okay, so where we left it is that the trolley problem is about to get real. Here's how Josh Greene put it.

Josh Greene:
You know, now as we're entering the age of self-driving cars, ah, this is like the trolley problem now finally come to life.

Speaker 37:
Oh, there's cars coming! Oh-

Speaker 38:
The future of the automobile is here.

Speaker 37:
Oh, there's cars! Ah-

Speaker 38:
Autonomous vehicles. It's here.

Speaker 37:
[crosstalk 00:26:21]

Speaker 39:
The first self-driving Volvo will be offered to customers in 2021.

Speaker 37:
... never. Ah, ah! Oh, where's it going [crosstalk 00:26:31]

Speaker 40:
This legislation is the first of its kind, focused on the car of the future that is more of a supercomputer on wheels.

Speaker 37:
Oh! Oh, there's a car coming! [crosstalk 00:26:39]

Jad Abumrad:
Okay, so self-driving cars, unless you've been living under a muffler, they are coming. It's going to be a little bit of an adjustment for some of us.

Speaker 37:
Ah!

Speaker 41:
Hit the brakes, hit the brakes.

Speaker 37:
No [crosstalk 00:26:49]

Jad Abumrad:
But what Josh meant when he said it's the trolley problem ...

Josh Greene:
... come to life ...

Jad Abumrad:
... is basically this.

Jad Abumrad:
Imagine this scenario.

Josh Greene:
The self-driving car now is, uh, headed towards a bunch of pedestrians in the road. The only way to save them is to swerve out of the way, but that will run the car into a concrete wall and it will kill the passenger in the car.

Josh Greene:
Uh, what should the car do? Should the car go straight and run over, say, those five people, or should it swerve and, and, and, and kill the one person?

Jad Abumrad:
That suddenly is a real-world question.
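
[Editor's note: stated as code, the dilemma Josh just described is a one-line cost comparison. This is a purely illustrative sketch in Python; the function name and casualty counts are hypothetical, not any manufacturer's actual logic. The hard part, as the rest of the segment shows, is not the comparison itself but deciding what counts as a casualty and whose casualties count.]

```python
# Toy version of the swerve-or-not dilemma: pick whichever maneuver is
# expected to harm fewer people. Illustrative only; no real car runs this.

def choose_maneuver(straight_casualties: int, swerve_casualties: int) -> str:
    """Return the maneuver with the lower expected casualty count."""
    return "swerve" if swerve_casualties < straight_casualties else "straight"

# Five pedestrians ahead, one passenger in the car:
print(choose_maneuver(straight_casualties=5, swerve_casualties=1))  # swerve
```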

Josh Greene:
If you ask people in the abstract...

Jad Abumrad:
... like what, theoretically, should a car in this situation do ...

Josh Greene:
They're much more likely to say ...

Speaker 42:
I think you should sacrifice one for the good of the many.

Josh Greene:
They should just try to do the most good or avoid the most harm.

Jad Abumrad:
So if it's between one driver and five pedestrians ...

Speaker 43:
Logically, it would be the driver.

Speaker 44:
Kill the, um, driver.

Speaker 45:
Be selfless.

Speaker 46:
I think it should kill the driver.

Jad Abumrad:
But when you ask people, forget the theory ...

Josh Greene:
Would you want to drive in a car that would potentially sacrifice you to save the lives of more people in order to minimize the total amount of harm? They say ...

Speaker 47:
No.

Speaker 48:
I wouldn't buy it.

Speaker 49:
No.

Speaker 50:
Absolutely not.

Speaker 51:
That would kill me in it? No.

Speaker 52:
So I'm not gonna, I'm not gonna buy a car that's gonna purposely kill me.

Speaker 53:
Hell no. I wouldn't buy it.

Speaker 54:
(laughs) For sure, no (laughs).

Speaker 53:
I'd sell it, but I wouldn't buy it.

Speaker 72:
Thank you very much. Have a good evening.

Jad Abumrad:
So there's your problem. People would sell a car, and an idea of moral reasoning, that they themselves wouldn't buy. And last fall, an exec at Mercedes-Benz face-planted right into the middle of this contradiction.

Speaker 55:
Welcome to Paris, one of the most beautiful cities in the world, and welcome to the 2016 Paris Motor Show, home to some of the most beautiful cars in the world. [crosstalk 00:28:42]

Jad Abumrad:
Okay, October 2016, the Paris Motor Show. You had something like a million people coming in over the course of a few days. All the major car-makers were there.

Speaker 56:
Here is Ferrari. You can see the LaFerrari Aperta, and of course the new GTC4Lusso T. [crosstalk 00:28:58]

Jad Abumrad:
Everybody was debuting their new cars. And, uh, one of the big presenters in this whole affair was this guy.

Christoph von H:
In the future, you'll have cars where you don't even have to have your hands on the steering wheel anymore, but maybe you watch a movie on the head-up display or maybe you want to do your emails. That's really what we are striving for.

Jad Abumrad:
This is Christoph von Hugo, a senior safety manager at Mercedes-Benz. He was at the show, sort of demonstrating a prototype of a car that could sort of self-drive its way through traffic.

Christoph von H:
In this E-Class today, for example, you've a maximum of comfort and support systems.

Speaker 58:
You'll actually look forward to being stuck in traffic jams, won't you?

Christoph von H:
Of course, of course. [crosstalk 00:29:33]

Jad Abumrad:
He was doing dozens and dozens of interviews through the show, and in one of those interviews ... Unfortunately, this one we don't have on tape ... He was asked, "What would your driverless car do in a trolley problem-type dilemma, where maybe you have to choose between one or many?"

Jad Abumrad:
And he answered, quote ...

Michael Taylor:
If you know you can save one person, at least save that one.

Jad Abumrad:
If you know you can save one person, save that one person.

Michael Taylor:
Save the one in the car.

Jad Abumrad:
This is Michael Taylor, correspondent for Car and Driver magazine. He was the one that Christoph von Hugo said that to.

Michael Taylor:
If you know for sure that one thing, one death, can be prevented, then that's your first priority.

Amanda Aronczyk:
Now when he said this to you ...

Jad Abumrad:
This is producer Amanda Aronczyk.

Amanda Aronczyk:
... Did it seem controversial at all in the moment?

Michael Taylor:
In the moment, it seemed incredibly logical.

Jad Abumrad:
I mean, all he's really doing is saying what's on people's minds, which is that ...

Speaker 47:
No.

Speaker 48:
I- I wouldn't buy it personally [crosstalk 00:30:29]

Jad Abumrad:
Who's gonna buy a car that chooses somebody else over them?

Jad Abumrad:
Anyhow, he makes that comment, Michael prints it, and a kerfuffle ensues.

Speaker 61:
"Save the one in the car." That's Christoph von Hugo from Mercedes ...

Speaker 62:
But then when you lay out the questions, you sound like a bit of a heel because you want to save yourself as opposed to the pedestrians.

Speaker 63:
Doesn't it ring, though, of, like, just privilege, you know?

Speaker 64:
It does, yeah, it does.

Speaker 65:
(laughing) [inaudible 00:30:55], wait a second. What would you do? It's you or a pedestrian. And it's just you know, I don't know anything about this pedestrian. It's just you or a pedestrian, just a regular guy walking down the street.

Speaker 66:
Ah, screw everyone who's not in a Mercedes.

Josh Greene:
And there was this kind of uproar about that, uh, how dare you drive these selfish, you know, make these selfish cars? Uh, and then he walked it back, and he said, "No, no, what I mean is that, uh, just, that we, th- that we have a better chance of protecting the people in the car so we're going to protect them because they're easier to protect."

Josh Greene:
But of course, you know, there's always gonna be trade-offs.

Robert Krulwich:
Yeah.

Jad Abumrad:
And those trade-offs could get really, really tricky and subtle. Because obviously these cars have sensors.

Raj Rajkumar:
Sensors like, uh, cameras, radars, lasers, and ultrasound sensors.

Jad Abumrad:
This is Raj Rajkumar. He's a professor at Carnegie Mellon.

Raj Rajkumar:
I'm the, uh, co-director of the GM-CMU Connected and Autonomous Driving Collaborative Research Lab.

Jad Abumrad:
He is one of the guys that is writing the code that will go inside GM's, uh, driverless car. He says, "Yeah, the sensors at the moment on these cars ..."

Raj Rajkumar:
Still evolving ...

Jad Abumrad:
Pretty basic.

Raj Rajkumar:
We are very happy if today, it can actually detect a pedestrian, can detect a bicyclist, a motorcyclist. Different makers have different shapes, sizes, and colors.

Jad Abumrad:
But he says, it won't be long before ...

Raj Rajkumar:
You can actually know a lot more about, uh, who these people are [crosstalk 00:32:15]

Jad Abumrad:
Eventually they will be able to detect people of different sizes, shapes, and colors. Like, oh, that's a skinny person, that's a small person, tall person, black person, white person. That's a little boy, that's a little girl.

Jad Abumrad:
So forget the basic moral math. Like, what does a car do if it has to decide, oh, do I save this boy or this girl? What about two girls versus one boy and an adult? How about a cat versus a dog? A 75-year-old guy in a suit versus that person over there who might be homeless? You can see where this is going.

Jad Abumrad:
And it's conceivable that cars will know our medical records, and back at the car show ...

Speaker 68:
We've also heard that term, "car-to-car communication."

Christoph von H:
Well, that's, uh, also one of the enabling technologies in highly-, uh, automated driving. [crosstalk 00:32:54]

Jad Abumrad:
Mercedes guy basically said, in a couple of years, the cars will be networked. They'll be talking to each other. So just imagine a scenario where, like, cars are about to get into accidents and right at the decision point, they're, like, conferring.

Jad Abumrad:
"Well, who do you have in your car?"

Jad Abumrad:
"Me, I got a 70-year-old Wall Street guy, makes eight figures. How about you?"

Jad Abumrad:
"Well, I'm a bus full of kids. Kids have more years left. You need to move."

Jad Abumrad:
"Well, hold up. I see that your kids come from a poor neighborhood and have asthma so I don't know ..."

Raj Rajkumar:
So, you can basically, uh, tie yourself up in, uh, knots, uh, wrap yourself around an axle. We do not think that any programmer should be given this, uh, major, uh, burden of deciding who survives and who gets killed.

Raj Rajkumar:
I think these are, uh, very fundamental, deep, uh, issues that society has to decide at large. I don't think a programmer, uh, eating pizza and sipping Coke should be making the call.
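
[Editor's note: to see why Rajkumar balks, it helps to imagine what the "pizza-and-Coke" version of that code would have to look like. The sketch below is entirely hypothetical; every weight in it is an arbitrary value judgment smuggled into arithmetic, which is exactly his point.]

```python
# Hypothetical "who do we save?" scoring rule of the kind Rajkumar argues
# no individual programmer should write. None of these weights come from
# any real system; each one silently decides who survives.

from dataclasses import dataclass

@dataclass
class Person:
    age: int

def harm_score(people: list[Person]) -> float:
    """Sum a made-up cost over the people a maneuver would endanger."""
    total = 0.0
    for p in people:
        total += 1.0                        # one point per life...
        total += 0.01 * max(0, 80 - p.age)  # ...plus extra for "years left"?
    return total

# The conferring-cars scenario, reduced to arithmetic:
bus_of_kids = [Person(age=8), Person(age=9), Person(age=10)]
one_banker = [Person(age=70)]
print(harm_score(bus_of_kids) > harm_score(one_banker))  # True: the banker "loses"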

Jad Abumrad:
(laughs) How does society decide? I mean, help me imagine that.

Raj Rajkumar:
I think it really has to be an evolutionary process, I believe.

Jad Abumrad:
Raj told us that two things basically need to happen. First, we need to get these robocars on the road, get more experience with how they interact with us human drivers and how we interact with them. And two, there need to be like, industry-wide summits.

Bill Ford Jr.:
No one company is going to solve that. [crosstalk 00:34:06]

Jad Abumrad:
This is Bill Ford Jr. of the Ford company, uh, giving a speech in October of 2016 at the Economic Club of D.C.

Bill Ford Jr.:
And we have to have ... Because could you imagine if we had one algorithm and Toyota had another and General Motors had another? I mean, it would be ... I mean, obviously you couldn't do that. [crosstalk 00:34:21]

Jad Abumrad:
'Cause like, what if the Tibetan cars make one decision and the American cars make another?

Bill Ford Jr.:
So, we need to have a national discussion on ethics, I think, uh, because we've never had to think of these things before, but the cars will have the time and the ability to do that.

Speaker 70:
[foreign language 00:34:36] [crosstalk 00:34:40]

Jad Abumrad:
So far, Germany is the only country that we know of that has tackled this head-on.

Speaker 70:
[German 00:34:46]

Speaker 71:
One of the most significant points the ethics commission made is that autonomous and connected driving is an ethical imperative.

Jad Abumrad:
They ... The government has released a code of ethics that says, among other things, self-driving cars are forbidden to discriminate between humans in almost any way. Not on race, not on gender, not on age, nothing.

Speaker 71:
[crosstalk 00:35:07] ... says priorities like these shouldn't be programmed into the cars. [crosstalk 00:35:09]
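
[Editor's note: in code terms, the commission's rule is a constraint on what the decision function is allowed to see. A minimal Python sketch of that idea, with hypothetical names; the planner may count endangered people, but attributes like age, gender, or race must never reach it.]

```python
# Sketch of the German rule: count people, never identify them.

def head_count(detected_people: list[dict]) -> int:
    """The only attribute the rule permits the planner to use: how many."""
    return len(detected_people)

def choose_path(paths: dict[str, list[dict]]) -> str:
    """Pick the path endangering the fewest people, discriminating on nothing else."""
    return min(paths, key=lambda name: head_count(paths[name]))

# Example: perception output is stripped down to counts before deciding.
print(choose_path({"straight": [{}, {}], "swerve": [{}]}))  # swerve
```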

Raj Rajkumar:
One can imagine, uh, a few clauses being added, uh, in the Geneva Convention, if you will, of what these automated vehicles should do. A globally-accepted standard, if you will.

Jad Abumrad:
How we get there, to that globally-accepted standard, is anyone's guess. And what it will look like, whether it will be, like, a coherent set of rules or, like, rife with the kind of contradictions we see in our own brain, that also remains to be seen. But one thing is clear.

Speaker 37:
Oh, there's cars coming! Oh, oh there's cars! Ah, [crosstalk 00:35:42]

Jad Abumrad:
Oh, there are cars coming.

Speaker 37:
Feel this-

Jad Abumrad:
With their questions.

Speaker 37:
[inaudible 00:35:48] control it. Oh, dear Jesus, I could never, ah! Ah! Oh, where's it going! Goddamn, Bill. Oh my god. [crosstalk 00:36:00]

Jad Abumrad:
Okay, we do need to caveat all this by saying that the moral dilemma we're talking about in the case of these driverless cars is gonna be super rare.

Jad Abumrad:
Mostly what will probably happen is that, like, the planeloads of people that die every day from car accidents, well that's just gonna hit the floor. And so you have to balance the few cases where a car might make a decision you don't like against the massive number of lives saved.
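
[Editor's note: the arithmetic behind that balance, using assumed round numbers rather than figures from the episode: roughly 1.3 million people die on the world's roads each year, which is several planeloads a day.]

```python
# Back-of-envelope only; both numbers are assumptions, not reporting.
road_deaths_per_year = 1_300_000        # rough global figure
per_day = road_deaths_per_year / 365    # ~3,560: many "planeloads" a day
after_autonomy = per_day * 0.10         # assume a 90% reduction
print(round(per_day), round(after_autonomy))  # 3562 356
```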

Robert Krulwich:
I was thinking actually of a different thing. I was thinking even though you dramatically bring down the number of bad things that happen on the road ... You dramatically bring down the collisions, you dramatically bring down the mortality, you dramatically lower the number of people who are drunk coming home from a party and just ram someone sideways, killing three of them and injuring two of them for the rest of their lives.

Robert Krulwich:
Those kinds of things go way down, but th- the ones that remain are engineered. Like, they are calculated, uh, almost with foresight.

Jad Abumrad:
Mm-hmm (affirmative).

Robert Krulwich:
So here's the difference. And this is just an interesting difference, like, "Ah, damn, that's so sad that happened. That that guy got drunk and dadada, and maybe he should go to jail."

Robert Krulwich:
But, "You mean that the society engineered this in?"

Jad Abumrad:
(laughs)

Robert Krulwich:
That is a big difference. One is operatic and seems like the forces of destiny, and the other seems mechanical and pre-thought through.

Jad Abumrad:
Premeditated, yeah.

Robert Krulwich:
And there's something dark about a premeditated expected death. And I don't know what you do about that.

Jad Abumrad:
Well, yeah, but in-

Robert Krulwich:
Everybody's on the hook for that.

Jad Abumrad:
In the particulars, in the particulars it feels dark. It's a little bit like when, you know, should you kill your own baby to save the village.

Robert Krulwich:
Okay.

Jad Abumrad:
Like, in the particular of that one child it's dark. But against the backdrop of the lives saved, it's just a tiny pinprick of darkness. That's all it is.

Robert Krulwich:
Yeah, but you know how humans are. If you argue back that yes, a bunch of smartypantses concocted a mathematical formula which meant that some people had to die and here they are, there are many fewer than before! A human being, just like Josh would tell you, would have a roar of feeling and of anger and saying, "How dare you engineer this in! No, no, no, no, no!"

Jad Abumrad:
And that human being needs to meditate like the monks to silence that feeling because the feeling in that case is just getting in the way!

Robert Krulwich:
Yes and no. And that may be impossible unless you're a monk, for God's sake.

Jad Abumrad:
See, we're right back where we started now. All right, we should go.

Robert Krulwich:
Jad, you have to thank some people, no?

Jad Abumrad:
Yes, oh, uh, this piece was produced by Amanda Aronczyk, with help from Bethel Habte. Special thanks to Iyad Rahwan, Edmond Awad, and Sydney Levine from the Moral Machine group at MIT. Also thanks to Sertac Karaman, Xin Xiang, and Roborace for all their help.

Jad Abumrad:
And I guess we should go now.

Robert Krulwich:
Yeah. I'll um-

Jad Abumrad:
I'm Jad Abumrad.

Robert Krulwich:
I'm not getting into your car.

Jad Abumrad:
(laughs)

Robert Krulwich:
If you don't mind. Just take my own.

Jad Abumrad:
I'm going to rig up an autonomous vehicle to the bottom of your bed.

Jad Abumrad:
(laughing)

Jad Abumrad:
So you're going to go to bed and suddenly find yourself on the highway driving you wherever I want.

Robert Krulwich:
(laughs) No you won't.

Jad Abumrad:
Anyhow, uh, okay, we should go.

Robert Krulwich:
Yeah.

Jad Abumrad:
I'm Jad Abumrad.

Robert Krulwich:
I'm Robert Krulwich.

Jad Abumrad:
Thanks for listening.

Speaker 73:
Received today at 2:41 PM.

Josh Greene:
All right. This is Josh Greene, giving you your credit.

Speaker 74:
Hi, this is Michael Taylor for Amanda.

Josh Greene:
Uh, here we go. Radiolab was created by Jad Abumrad and is produced by Soren Wheeler.

Speaker 74:
Dylan Keefe is our director of Sound Design.

Josh Greene:
Our staff includes Simon Adler, Rachel Cusick, David Gebel ...

Speaker 74:
... [inaudible 00:39:57], Tracie Hunte, Matt Kielty, Robert Krulwich ...

Josh Greene:
... Bethel Habte ... Sorry, I'm in an airport, and we've got the, uh, the overhead announcement. All right, I'll keep going.

Speaker 74:
... Annie McEwen, Latif Nasser, Malissa O'Donnell, Arianne Wack, and Molly Webster ...

Josh Greene:
... with help from Amanda Aronczyk, Shima Oliaee, David [Fox 00:40:18], [Nija Fatalie 00:40:19], uh, [Niar Fatalie 00:40:22] C.B. Wang, and Katie Ferguson.

Speaker 74:
Our fact checker is Michelle Harris.

Josh Greene:
I hope that does it for you. Let me know if you want me to do it again. Thanks guys. I'm looking forward to hearing the show. Bye.

Speaker 73:
End of message.

 

Copyright © 2019 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.

New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.