
May 17, 2018
Transcript
Jad Abumrad:
Wait. Wait, you're listening...
Speaker 2:
Okay.
Jad Abumrad:
All right.
Speaker 2:
Okay.
Jad Abumrad:
All right.
Speaker 2:
You're listening...
Jad Abumrad:
Listening...
Speaker 2:
To Radiolab.
Jad Abumrad:
Radio...
Speaker 2:
From...
Jad Abumrad:
WNY...
Speaker 2:
C...c.
Jad Abumrad:
Yeah. I'm Jad.
Robert:
I'm Robert.
Speaker 2:
Um, are you guys ready to do this? Maybe we should just do this?
Jad Abumrad:
This is Radiolab.
Speaker 2:
All right, but when your hosts come out, I need you to seriously clap like you've never seen two dudes with glasses talking on a microphone okay?
Crowd:
(laughs)
Speaker 2:
Okay, so like, just really, really give it up for your mostly human hosts, Jad Abumrad and Robert Krulwich.
Robert:
So, about a week ago we gathered, I guess, roughly a hundred people...
Speaker 2:
Hello.
Robert:
... into a performance space, which is in our building here at WNYC. It's called the Greene Space.
Jad Abumrad:
Yeah.
Robert:
This is like a, like a playground for us so we can just try things.
Jad Abumrad:
We decided to gather all these people into a room on a random Monday night. What else are you doing on a Monday, right? Because seven years previous we had made a show called Talking to Machines, which was all about, like, what happens when you talk to a computer that's pretending to be human.
Robert:
Right.
Jad Abumrad:
And, the thing is, so much has happened since we made that show, with the proliferation of bots on Twitter. Russian bots meddling in elections. The advances in AI. So much interesting stuff had happened that we thought, it is time to update that show.
Robert:
And, we needed to do it live we thought, because we had a little plan in mind. We wanted to put unsuspecting people into a room for a kind of showdown between people and machines. But we, we want to set the scene a little bit and give you, uh, just a flavor of what we're really gonna be... [crosstalk 00:01:39]
Robert:
Just to start things off, we brought to our stage one of the guys who inspired that original show.
Jad Abumrad:
Please welcome to the stage writer Brian Christian.
Robert:
So, so, just so we can just, just get things sort of oriented, uh, we need to first of all just redefine what a chatbot is.
Brian Christian:
Right. So, a chatbot is a computer program, uh, that exists to mimic and impersonate human beings.
Robert:
Like, when do I run into them?
Brian Christian:
You go to a website to interact with some customer service. You might find yourself talking to a chatbot. Um, the US Army has a chatbot called Sgt. Star that recruits people.
Jad Abumrad:
Now, can I ask you a question about the, the s- the thing you just said about chatting with customer service?
Brian Christian:
Yeah.
Jad Abumrad:
Which I, I end up doing a lot.
Crowd:
(laughs)
Jad Abumrad:
Um, I'm sorry. Which is I, uh, you know, like, it's the middle of the night, you're trying to figure out some program and it's not working, and then suddenly there's like, "Need to chat?" And you click on that.
Robert:
What do you mean suddenly there's need to chat?
Jad Abumrad:
Well, it's like you, you're, whatever.
Crowd:
(laughs)
Robert:
Okay.
Jad Abumrad:
I assume many of you have had this experience, uh.[crosstalk 00:02:46]
Robert:
I've had very few of the experiences that he's had, so there's just the, that issue always.
Jad Abumrad:
I'm always curious it, it, what... it seems very human when you're having that, that conversation with a, with a customer service chatbot. Is there a, is there a place where it... Where is the line between human and robot, seeing that they're both present?
Brian Christian:
Yeah. Well, this, this is the question, right? So, we're now sort of accustomed to having this uncanny feeling of not really knowing the difference. My guess, for what it's worth, is that there's a system on the back end that's designed to sort of do triage, where the first few exchanges that are just like, hey, how can I help? What's going on, it seems like there's an issue with the such and such. Um, that is basically just a chatbot, and at a certain point, you kind of seamlessly transition and are handed off to a real person.
Jad Abumrad:
Mm-hmm (affirmative).
Brian Christian:
But, without any, you know, notification to you that this has happened. It's deliberately left opaque at what point that happens.
Jad Abumrad:
Wow.
Robert:
And, this is literally everywhere.
Brian Christian:
It is, and I mean, and you can't get on social media and read some comment thread without someone accusing someone else of being a bot.
Robert:
(laughs)
Brian Christian:
Um, and, you know, it seems, uh, it seems maybe sort of trivial at some level, but, we are now living through this political crisis of: how do we kind of come to terms with the idea that we can, you know, weaponize this kind of speech, and, how do we as consumers of the news or as users of social media try to suss out whether the people we're interacting with are in fact who they say they are?
Robert:
I'd just like... [crosstalk 00:04:21] And all this confusion about what's the machine and who's the human, it can get very interesting, in the context of a famous thought experiment named for the great mathematician Alan Turing. Brian told us about this; it's called the Turing Test.
Brian Christian:
Alan Turing, he makes this famous prediction back in 1950 that we'll eventually get to a point sometime around the beginning of this century where we'll stop being able to tell the difference.
Jad Abumrad:
Well, what specifically was his, sort of, prophecy?
Brian Christian:
His specific prophecy was that, by the year 2000, uh, after five minutes of interacting by text message with a human on one hand and a chatbot on the other hand, uh, 30% of judges would fail to tell which was the human and which was the robot.
Robert:
Is 30 just like a soft kind of-
Brian Christian:
30 is just what Turing imagined, and, he predicted that as a result of hitting this 30% threshold, we would reach a point, he writes, where, one would speak of machines as being intelligent without expecting to be contradicted. Um, and this just existed as kind of a, part of the philosophy of computer science until the early 1990s when, into the story steps Hugh Loebner, a rogue multi-millionaire disco dance floor salesman.
Robert:
(laughs)
Crowd:
(laughs)
Robert:
A what?
Brian Christian:
(laughs) a rogue millionaire, plastic portable light up disco dance floor salesman, he like-[crosstalk 00:05:47]
Robert:
You mean like the Bee Gees kind of s-
Brian Christian:
Yeah.
Jad Abumrad:
Wow.
Robert:
The, the lighting, the floor that lights up?
Brian Christian:
Yeah. But portable. (laughs)
Crowd:
(laughs)
Robert:
But portable. You can make a... You can be a rogue millionaire from that?
Brian Christian:
There's apparently millions to be made if, if, if only, if only you knew. Um, and, um, Hugh Loebner, this eccentric millionaire, uh, decides that we- this was in a- about 1992, that the technology was starting to get to the point where it would be worth, not just talking about the Turing Test as this thought experiment, but, actually convening a group of people in a room once a year to actually run the test.
Jad Abumrad:
Now, a bit of background. During the Loebner competitions, the actual Loebner competitions, how it usually works is that you've got some participants. These are the people who have to decide what's going on. They sit at computers, and they stare at the computer, and they chat with someone on a screen. Now, they don't know if someone they're chatting with is a person or a bot. Behind a curtain, you have that bot, a computer running the bot, and you also have some humans, who the participants may or may not be chatting with. They've gotta decide, right? Are they chatting with a person or a machine?
Jad Abumrad:
Now, Brian, many, many years ago actually participated in this competition. He was one of the humans behind the curtain that was chatting with the participants. And, when we talked to him initially many years ago, uh, for the Talking to Machines show, we went into all the strategies that the computer programs h- were using that year to try and fool the participants, but the takeaway was that, the year that he did it, the computers flopped. By and large, the participants were not fooled. They knew exactly when they were talking to a human, and when they were talking to a machine. And that was a while ago. In the Greene Space, we asked Brian, where do things stand now?
Robert:
Has it, like, when we last talked to you, what, when did we last, when was it? 2011?
Brian Christian:
2011.
Robert:
2011.
Jad Abumrad:
Ha- has it, have we passed the 30% threshold si- in the intervening eight years?
Brian Christian:
So, in 2014, there was a Turing Test competition that was held at which the top computer program managed to fool 30% of the judges.
Jad Abumrad:
Wow.
Brian Christian:
And, so...
Robert:
That's it right?
Brian Christian:
Depending on how you want to interpret that result, the controversy arose in this particular year, because the chatbot that won was claiming to be a 13-year-old Ukrainian, who was just beginning to get a grasp on the English language.
Jad Abumrad:
Oh.
Robert:
Oh, so the machine was cheating.
Brian Christian:
Right.
Jad Abumrad:
Well, that's interesting, so it masked its computerness by, in broken grammar.
Brian Christian:
Yeah, exactly, right, or it, if it didn't appear to understand your question you started to have this story you could play in your own mind of, oh, well maybe I didn't phrase that quite right or something.
Robert:
Has it been broke- has the, has it, there been a second winner, or a third winner, or a fourth winner or...
Brian Christian:
Um, to the best of my knowledge, we are still sort of flirting under that threshold.
Robert:
Well, since we haven't had any victories since 2014, w- we thought we might just do this right here.
Robert:
(singing)
Robert:
Just right here in this room, do our own little Turing Test.
Jad Abumrad:
Okay, unbeknownst to our audience, we had actually lined up a chatbot from a company called Pandorabots that had almost passed the Turing Test. It had fooled roughly, not quite, but almost 25% of the participants. We got the latest version of this bot, and...
Robert:
We just need one person. Anyone in the room. Um, your, your job will be[crosstalk 00:09:08].
Jad Abumrad:
We decided to run some tests with the audience, starting with just one person.
Robert:
I can see one hand over there. I'm so- I, I don't wanna get the first hand, I guess the-
Crowd:
(laughs)
Jad Abumrad:
What the... how about this person over here on the left?
Robert:
Okay.
Jad Abumrad:
So, we brought up this uh, young woman on stage, put her at a computer, and we told her she would be chatting with two different entities. One would be this bot, Pandorabot, and the other would be me. But, I was, I went off stage and sat at a computer in the dark where no one could see, and she was gonna chat with both of us, and not know who was who. Who was machine, and who was human.
Crowd:
(laughs)
Robert:
You won't know which.
Speaker 6:
Do I get as ma- I get as many questions as I-
Robert:
Well, I don't know. I know we're gonna give you a time limit. You can't be here all evening. [crosstalk 00:09:46] So, after Jad left the stage and went back into that room, up on the screen came two different chat dialogue boxes. You'll see that we have two options. We've just labeled them for one reason by color. One is Strawberry, the other is Blueberry, or code red and code blue. You think you can talk to both of them at the same time, just jump from one to the other?
Speaker 6:
Sure. Yeah.
Robert:
Have you got any sort of thoughts of how you could suss out whether the thing was a person or a thing?
Speaker 6:
Yeah, I have some thoughts. I mean, like, my first tactics are gonna be like, sort of, like, h- very human emotional questions, and then we'll like, go from there. See what see what-
Robert:
I really don't know what that means.
Speaker 6:
(laughs)
Crowd:
(laughs)
Robert:
But, I'm not gonna ask, 'cause I don't wanna, I don't wanna lose your inspiration.
Speaker 6:
Gonna try to therapise this robot.
Robert:
All right. So, when I say go, you'll just go, and I'll just narrate what you're doing okay?
Speaker 6:
Okay.
Robert:
Okay. Three, two, one, begin.
Speaker 6:
[inaudible 00:10:37].
Robert:
So, she started to type, and first thing she did was she said, "Hello." to Strawberry.
Crowd:
(laughs)
Robert:
Okay, so you've gotten your first... Well we've got a somewhat sexual response here. The machine has said, I like strawberries, and then you've returned with strawberries are delicious, and oh, now it's getting warmer over there. Blue is a warmer, is a cooler color. Maybe you'd like to go and discuss Aristotle with the blueberry.[crosstalk 00:11:01]
Crowd:
(laughs)
Robert:
Then she switched over and started to text the, the blue one, which is Blueberry.
Speaker 6:
I have [crosstalk 00:11:06].
Robert:
Oh, there he is, hi blue- hi [Bluesy Bee 00:11:08]. Okay. That's also, uh, a kind of a generous sort of opener.
Speaker 6:
Yeah. See if this...
Robert:
Hi Bluesy bee.
Speaker 6:
Guy has a nickname.
Robert:
Oh yeah. Okay. Let's...[crosstalk 00:11:16] And, Blueberry wrote back. Hi there. I just realized I don't even know who I'm talking to. What is your name? You're gonna answer Zondra, am I not in your phone?
Crowd:
(laughs)
Robert:
(laughs) And, the, the blueberry has responded with a, a bit of shock. Back to Strawberry. My mom's hair was red. Well that's...
Crowd:
(laughs)
Robert:
And, Blueberry. What's wrong, Boo? Nothing's wrong with me.
Crowd:
(laughs)
Robert:
Is there something wrong with you? And then back and forth, and back and forth.
Speaker 6:
[crosstalk 00:11:46] Blueberry and I have a lot going on.
Robert:
(laughs) Now, remember, one of these, she doesn't know which, is Jad. Right. On the Strawberry side.
Speaker 6:
I cannot believe him right now.
Robert:
You don't believe, right now as far as I know, not unless you have X-ray vision, I'm in the room next to you. Oh, he's trying to coax you into thinking that he's Jad.
Speaker 6:
Is that something they...
Robert:
That's blueberry.
Speaker 6:
Is that something they do?
Robert:
I don't know, I...
Crowd:
(laughs)
Robert:
There you're at the heart of the question. I'm gonna ask you to bring this to a conclusion... [crosstalk 00:12:14] After a couple minutes of this, we asked the audience, you have Strawberry on one side, and you have got Blueberry on the other. Which one do you think is Jad, and, which one do you think was the, was the bot?
Robert:
How many of you think that Jad is Blueberry? A few of you...[crosstalk 00:12:31] 13 hands went up, something like that. How many of you think that Jad is Strawberry? Almost everybody. Overwhelming.
Speaker 6:
Wow.
Robert:
But interestingly, our volunteer on stage went against the room. She thought Jad was Blueberry.
Speaker 6:
Strawberry is the robot.
Robert:
Is that what we all agreed? No.
Speaker 6:
Yeah.
Robert:
Oh you're, you're against the crowd here? Okay, interesting, interesting.
Speaker 6:
(laughs)
Crowd:
(laughs)
Robert:
Much better theater. All right, Jad Abumrad, where, whe- which one are you? So, Jad comes out from his hiding place and he tells the crowd, in fact, he is...
Jad Abumrad:
Strawberry.
Robert:
All right.
Jad Abumrad:
So, the crowd was right.
Speaker 6:
I've definitely never had that much chemistry with something that was human.
Jad Abumrad:
But, our volunteer on stage got it wrong.
Robert:
All right. Wait, bring it out, before you leave, we're gonna give you a t- [crosstalk 00:13:19] Now, it seemed that maybe we, we could uh, uh, trust democracy a little bit more, and, believe that if most of the people in the room went one way that that's something that would be, you know, that would be important to find out. So, we decided to do the entire thing over again for everybody in the room.
Jad Abumrad:
Yeah, so what we did was, we handed out I think 17 different cell phone numbers evenly through the crowd. Yes, look at the number that is on your envelope, only yours. Roughly half of those numbers were texting to a machine. Half were texting with a group of humans that were our staff.
Robert:
The crowd did not know which was which.
Jad Abumrad:
Exactly.
Robert:
So here we go. Get ready. Get set, and, off you go.
Jad Abumrad:
Okay, so the crowd of about a hundred people or so had two minutes-ish to text with the person or thing on the other end, and we're gonna skip over this part 'cause it was mostly quiet, people just looking down at their phones concentrating mightily, but at the end, we asked everyone to vote. Were they texting with a person or a bot?
Robert:
And then, we asked the ones who had been tricked who turned out to have guessed wrong, please stand up. Okay, so we're now looking, I believe... Now, it's time to tell me about it. We're now looking, the upright citizens in this room are the wrongites, and the seated people are the rightites.
Brian Christian:
Yes. Correct.
Robert:
So, that means that roughly... God, I think like 45% of the people were wrong, meaning that-
Jad Abumrad:
We just passed it.
Robert:
We just passed the [crosstalk 00:14:44].
Brian Christian:
I think that's it. We did it.
Jad Abumrad:
It was a strange moment. We were all clapping at our own demise, because you know, Turing had laid down this number of 30% and the bot had fooled way more people than that.
Robert:
Um, I'm just now gonna ask you. Having been a veteran of this [crosstalk 00:14:58].
Jad Abumrad:
And, we should just qualify that this was w- really unscientific.
Robert:
(laughs)
Jad Abumrad:
Super sloppy experiment.
Robert:
But, on the other hand, and we talked to Brian about this when it was over, it, it really does suggest something. That maybe what's changed is not so much the machines becoming more and more articulate, it's more like us. The way we, you and I, talk to one another these days.
Brian Christian:
We've gone from interacting in person to talking over the phone, to emailing, to texting, and now, I mean, for me the great irony is that even to text, your phone is proactively suggesting turns of phrase that it thinks you might want to use.
Robert:
Yeah.
Brian Christian:
And, so, I mean, I, I, I assume many people in, in this room have had the experience of trying to text something, and you try to say it in a like, a sort of a fun fanciful way, or you try to make some pun, or you use a word, and it's not a real word, and your phone just sort of slaps, slaps that down.
Robert:
(laughs) all the time.
Brian Christian:
And, just, replaces it with something more normal.[crosstalk 00:15:52]
Jad Abumrad:
Which makes it really hard to use words th- that aren't the normal words, and so you just stop using those words, and you just use the words the computer likes.
Robert:
You can't even, you know, they make you use it like-[crosstalk 00:16:01]
Jad Abumrad:
Exactly, so in a, in a sense what seems to be happening is that our human communication is becoming more machine-like.
Brian Christian:
At the moment it seems like the Turing Test is getting passed, not because the machines have met us at our full potential, but because we are using ever more and more kind of degraded, sort of rote forms of communication with one another.
Robert:
This feels like a slow slide down a hill or something.
Jad Abumrad:
Yes. Down that hill, towards the inevitability that we may one day be their pets.
Robert:
I don't, I don't like the way this is going no matter who's in, who's doing it.
Jad Abumrad:
But, uh, in the next segment, we're gonna, we're gonna flip things a little bit and ask, you know, could the coming age of machines actually make us humans more human?
Robert:
So, humans should please stick around.
Jad Abumrad:
This is H.A [Skoolante 00:17:07] calling from Chicago, Illinois. Radiolab is supported in part by the Alfred P. Sloan Foundation, enhancing public understanding of science and technology in the modern world. More information about Sloan at www.Sloan.org
Ilya Marritz:
Hello, it's Ilya Marritz, co-host of Trump, Inc. Donald Trump is the only recent president to not release his tax returns. The only president you can pay directly by booking a room at his hotel. He shreds rules, sometimes literally.
Speaker 8:
He didn't care what records was. He tore up memos or things and just threw 'em in the trash, so, it took somebody from the White House staff to tell him, like, look, you can't do that.
Ilya Marritz:
Trump, Inc., an open investigation into the business of Trump, from ProPublica and WNYC. Subscribe wherever you get your podcasts.
Jad Abumrad:
Hey, I'm Jad.
Robert:
I'm Robert.
Jad Abumrad:
This is Radiolab, we're back.
Robert:
In the last segment, we gathered a bunch of people in the performance space here at WNYC, and we conducted an unscientific version of the Turing Test.
Jad Abumrad:
And, in our case, the bot won. It fooled more than 30% of the people in the room.
Robert:
Now, we should point out that the woman who headed up the design of the winning bot...
Jad Abumrad:
Her name is Lauren Kunze.
Robert:
She works for a company called Pandorabots, and she was actually in the room, right there, sitting in a chair. In the audience.
Jad Abumrad:
Lauren could you stand up? Come on down here.
Robert:
And Lauren, like, that's Lauren. And, it's interesting that one of the things that Lauren mentioned is that the bot that she designed seems to bring out, rather consistently, a certain side of people when they chat with it.
Lauren Kunze:
Um, it's a sad fact, so this bot, over 20% of people that talk to her, and millions of conversations every week, actually make romantic overtures. And, that's pretty consistent across all of the bots on our platform. So, there's something wrong with us, not the robots.
Robert:
(laughs) Or right, you know, all right.
Lauren Kunze:
Or right. You're right.
Jad Abumrad:
Lauren-[crosstalk 00:19:05]
Robert:
Which brings up actually a different kind of question, like, just for a second, let's forget whether we're being fooled into thinking a bot is actually a human. Maybe the more important question, given this increasing presence of all these machines in our lives...
Jad Abumrad:
Just like how do they make us behave?
Robert:
Yeah.
Jad Abumrad:
And, we dipped our toe into this world in a Turing Test-y sort of way in that original show seven years ago. I'm going to play you an excerpt now, uh, to set up, w- what comes after.
Freedom Baird:
Okay. (laughs)
Robert:
This is Freedom Baird.
Freedom Baird:
Yes it is.
Jad Abumrad:
Who's not a machine.
Robert:
I don't think so.
Freedom Baird:
Hi there, nice to meet both of you.[crosstalk 00:19:37]
Jad Abumrad:
This is an idea that we borrowed from a woman named Freedom Baird, uh, who is now a visual artist, but at the time she was a grad student at MIT doing some research, and she was also the proud owner of a Furby.
Robert:
... alive.
Freedom Baird:
Yeah, I've got it right here.
Jad Abumrad:
Could, could you knock it against the mic so we can hear it say hello to it?
Freedom Baird:
Yeah. There it is. (laughs)
Furby:
Furby.
Furby:
(singing)
Robert:
Can you describe a Furby for those of us who...
Freedom Baird:
[crosstalk 00:19:59] Sure. It's about five inches tall, and the Furby is pretty much all head. It's just a big round fluffy head with two little feet sticking out the front. It has big eyes.
Robert:
Apparently it makes noises.
Freedom Baird:
Yup. If you tickle its tummy, it will coo. It would say...
Furby:
Kiss me.
Freedom Baird:
Kiss me.
Furby:
Kiss me.
Freedom Baird:
And, it would want you to just keep playing with it.
Furby:
(laughs)
Freedom Baird:
So-[crosstalk 00:20:25]
Robert:
One day, she's hanging out with her Furby, and she notices something...
Freedom Baird:
Very eerie. What I'd discovered is, if you hold it upside down, it will say...
Furby:
Me scared.
Freedom Baird:
Me scared.
Furby:
Me scared. Me scared.
Freedom Baird:
Uh, oh. Me scared. Me scared. And, me, as the, you know, the sort of owner slash user of this Furby would get really uncomfortable with that and then turn it back up, upright.
Robert:
'cause once you have it upright it, it's fine. It goes right back to...
Freedom Baird:
And then it's fine. So, it's got some sensor in it that knows, you know, what direction it's facing.
Jad Abumrad:
Or, maybe it's just scared.
Freedom Baird:
Hmm.
Robert:
(laughs)
Jad Abumrad:
Sorry.
Robert:
Anyway, w- she thought, well, wait a second now, this could be, sort of a new way that you could use to draw the line between what's human...
Jad Abumrad:
And what's machine.
Robert:
Yeah.
Freedom Baird:
Kind of, it's this kind of emotional Turing Test.
Jad Abumrad:
Can you guys hear me?
Audience:
Yes.
Jad Abumrad:
I can hear you.
Robert:
Hey, if we actually wanted to do this test, could you help, how would we do it exactly?
Jad Abumrad:
How are you guys doing?
Audience:
Good.
Jad Abumrad:
Yeah.
Freedom Baird:
You would need a group of kids.
Jad Abumrad:
Can you guys tell me your name?
Olivia:
I'm Olivia.
Luisa:
Luisa.
Turin:
Turin.
Thural:
[Thural 00:21:37].
Lila:
Lila.
Sadie:
And I'm Sadie.
Jad Abumrad:
All right.
Freedom Baird:
I'm thinking six, seven, and eight year olds.
Jad Abumrad:
And how old are you guys?
Audience:
Seven, seven.
Freedom Baird:
The age of reason, you know.
Audience:
Eight.
Robert:
Then says Freedom, we're gonna need three things.
Freedom Baird:
A Furby.
Robert:
Of course.
Freedom Baird:
Barbie.
Robert:
A Barbie doll, and...
Freedom Baird:
Gerby. That's a gerbil.
Jad Abumrad:
A real gerbil?
Freedom Baird:
Yeah.
Jad Abumrad:
And, we did find one, except it turned out to be a hamster. Sorry, you're a hamster, but, we're gonna call you Gerby.
Freedom Baird:
So, you've got Barbie, Furby, Gerby.
Robert:
Barbie, Furby, and Gerby. [crosstalk 00:22:02]
Freedom Baird:
Right.
Robert:
So wait just a second. What question are we asking in this task?
Freedom Baird:
The question was, how long can you keep it upside down before you yourself feel uncomfortable?
Jad Abumrad:
So, we should time the kids as they hold each one upside down.
Freedom Baird:
Yup.
Jad Abumrad:
Including the gerbil.
Freedom Baird:
Yeah.
Robert:
Are you gonna have a Barbie? That's a doll, you're gonna have Gerby, which is alive. Now, where would Furby fall?
Jad Abumrad:
In terms of time held upside down?
Freedom Baird:
That, I mean that was really the question.
Jad Abumrad:
Phase one. Okay, so, here's what we're gonna do. It's gonna be really simple.
Freedom Baird:
Um, you would have to say, well, here's a barbie.
Jad Abumrad:
Do you guys play with Barbies?
Audience:
No.
Freedom Baird:
Just, do a couple things, a few things with Barbie.
Audience:
Barbie's walking, looking at the flowers.
Jad Abumrad:
And then?
Freedom Baird:
Hold Barbie upside down.
Audience:
Okay.
Jad Abumrad:
Let's see how long you can hold Barbie like that.
Audience:
Um, I could probably do it obviously very long.
Jad Abumrad:
Yeah. Let's just see. Whenever you feel like you want to turn it around.
Audience:
I feel fine. I'm happy.
Jad Abumrad:
This went on forever, so, let's just fast forward a bit. Okay. And...
Audience:
Can I put my arms, my elbows down?
Jad Abumrad:
Yes. Yeah. So, what we learned here in phase one is the not surprising fact that kids can hold Barbie dolls upside down...
Audience:
For like, about five minutes.
Robert:
Yeah. It really was forever.
Jad Abumrad:
It could have been longer, but their arms got tired.
Audience:
(laughs)
Jad Abumrad:
All right. So, that was the first task. Time for phase two.
Freedom Baird:
Do the same thing with Gerby.
Jad Abumrad:
So, out with Barbie, in with Gerby.
Audience:
Aw, he's so cute. Are we gonna have to hold him upside down?
Jad Abumrad:
That's the test yeah.
Audience:
(laughs).
Jad Abumrad:
So, which one of you would like to...
Audience:
I'll try and be brave.
Jad Abumrad:
Okay ready? And, you have to hold Gerby kind of firmly.
Audience:
Oh, god. There you go.
Jad Abumrad:
There she goes she's wiggling.
Audience:
Um...
Jad Abumrad:
By the way, no rodents were harmed in this whole situation.
Audience:
Squirmy.
Jad Abumrad:
Yeah. She is pretty squirmy.
Audience:
I don't think it wants to be upside down. Oh god. Don't do that. I got it. (laughs) There we go.
Jad Abumrad:
Okay. So, as you heard, uh. Watch out little Gerby. The kids turned Gerby over very fast.
Audience:
I just didn't want him to get hurt.
Jad Abumrad:
On average, eight seconds.
Audience:
I was thinking, oh my god, I gotta put him down, I gotta put him down.
Jad Abumrad:
And, it was a tortured eight seconds.
Robert:
(laughs)
Jad Abumrad:
Now, phase three.
Freedom Baird:
Right.
Jad Abumrad:
So, this is a Furby. (laughs)
Audience:
(laughs)
Jad Abumrad:
Luisa, can you take Furby in your hand.
Luisa:
Oh.
Jad Abumrad:
Now, could you turn Furby upside down, hold her still, like that.
Furby:
[Foreign language 00:24:32] Me scared. [Foreign language 00:24:32]. Hey. Scared.
Luisa:
Can you be quiet?
Furby:
No, no, ha, ha, ah.
Jad Abumrad:
She just turned it over.
Luisa:
Okay. That's better.
Jad Abumrad:
So, Gerbil was eight seconds, Barbie five to infinity, Furby turned out to be, and Freedom predicted this...
Freedom Baird:
About a minute.
Jad Abumrad:
In other words, the kids seemed to treat this Furby, this toy, more like a gerbil than a Barbie doll. How come you turned him over so fast?
Luisa:
Um, I didn't want him to be scared.
Jad Abumrad:
Do you think he really felt scared?
Luisa:
Yeah, kind of.
Jad Abumrad:
Yeah?
Luisa:
I kind of felt guilty.
Jad Abumrad:
Really?
Furby:
[Foreign language 00:25:27]
Luisa:
Yeah. It's a toy and all that, but still.
Jad Abumrad:
Now, do you remember a time when you felt scared?
Luisa:
Yeah. Yeah.
Jad Abumrad:
You don't have to tell me about it, but, if you could remember it in your mind?
Luisa:
I do.
Jad Abumrad:
Yeah. Do you think, when Furby says, "Me scared," that Furby's feeling the same way?
Luisa:
Yeah. Uh, no, no, no.
Audience:
No.
Luisa:
Yeah, yeah.
Audience:
I'm not sure, I'm not sure. I think that it can feel pain. Sort of.
Jad Abumrad:
The experience with the Furby seemed to leave the kids kind of conflicted. Going in different directions at once.
Audience:
It was two thoughts.
Jad Abumrad:
Two thoughts at the same time?
Audience:
Yeah.
Jad Abumrad:
One thought was like, look, I get it.
Audience:
It's a toy for crying out loud.
Jad Abumrad:
But, another thought was like...
Audience:
Um...
Jad Abumrad:
Still.
Audience:
He was helpless. It kind of made me feel guilty in a sort of way. It made me feel like a coward.
Freedom Baird:
You know, when I was interacting with my Furby a lot, I did have this feeling sometimes of having my chain yanked.
Robert:
Why would, why would a, is it just the little squeals that it's making, or is there something about the toy that makes it good at this?
Jad Abumrad:
Well, that was kind of my question, so I called up, uh.
Caleb Chung:
... studio as well. I'll have him. I'm here.
Jad Abumrad:
This freight train of a guy.
Caleb Chung:
Hey.
Jad Abumrad:
Hey, this is Jad from Radiolab.
Caleb Chung:
Jad from Radiolab got it, I'm
Jad Abumrad:
How are you?
Caleb Chung:
I'm uh, I'm good. Beautiful day here in Boise.[crosstalk 00:26:48]
Jad Abumrad:
At this point in that old show, we ended up talking to a guy named Caleb Chung who designed the Furby.
Caleb Chung:
There's rules, there's you know, the size of the eyes, there's the distance of the top lid to the pupil. Right.
Jad Abumrad:
Right.
Caleb Chung:
You don't want any of the top of the white of your eyes showing. That's, that's, that's freaky surprise.
Jad Abumrad:
[crosstalk 00:27:04] So, we talked to him for a long time about all the sort of tricks he used to program the Furby to prompt kids to think of it as a living thing, and, interestingly, at one point he objected to thinking of it as not exactly a living thing.
Caleb Chung:
How is that different than us?
Jad Abumrad:
Wait a second though, are you really gonna go all the way there?
Caleb Chung:
Absolutely.
Jad Abumrad:
This is a toy with servo motors, and things that move its eyelids, and...
Caleb Chung:
Tell me, and tell.
Jad Abumrad:
A hundred words.
Caleb Chung:
So, you're saying that life is a level of complexity?
Jad Abumrad:
I...
Caleb Chung:
If something is alive it's just more complex?
Jad Abumrad:
I think I'm saying that life is driven by the need to be alive, and by these base primal animal feelings like pain and suffering.
Caleb Chung:
I, I can, I can code that. I can code that.
Jad Abumrad:
What do you mean you can code that?
Caleb Chung:
Anyone who, who writes software, and they do, can say, okay, I need to stay alive. Therefore, I'm going to come up with ways to stay alive. I'm gonna do it in a way that's very human, and I'm going to do it... We, we can mimic these things. I'm saying that-
Jad Abumrad:
But if the Furby is miming the feeling of fear, it's not the same thing as being scared. It's not feeling scared.
Caleb Chung:
It is.
Jad Abumrad:
How is it?
Caleb Chung:
It is.
Jad Abumrad:
And then...
Caleb Chung:
It's again a very simplistic[crosstalk 00:28:07].
Jad Abumrad:
We got into a rather long back and forth.
Caleb Chung:
Would you say a cockroach is alive, would you say, I mean, there are creatures that you say are alive.
Jad Abumrad:
Yes, but when I kill a cockroach, I know that it's feeling pain. About what is the exact definition of life, where is that line between people and machines. But, when we came back to Freedom, who'd gotten us started on this...
Freedom Baird:
It's a thin interaction.[crosstalk 00:28:23]
Jad Abumrad:
She says, what really stuck with her, is that that little toy, as simple as it is, can have such a profound effect on a human being.
Freedom Baird:
One thing that was really fascinating to me was, um, my husband and I gave a Furby as a gift to his grandmother, who had Alzheimer's. And, she loved it. Every day for her was kind of new and somewhat disorienting, but she had this cute little toy that said, "Kiss me. I love you." And, she thought it was the most delightful thing, and its little beak was covered with lipstick because she would pick it up and kiss it every day, and, she didn't actually have a long-term relationship with it. For her it was always a short-term interaction. So, the th- what I'm describing is a kind of thinness. For her, it was, was just right, because that's what she was capable of.
Caleb Chung:
Okay.
Jad Abumrad:
Hello, hello.
Caleb Chung:
Hey, it's Caleb.
Jad Abumrad:
Hey Caleb. It's Jad.
Caleb Chung:
Hey Jad, how are you?
Jad Abumrad:
I'm fabulous, um...
Caleb Chung:
Oh good.
Jad Abumrad:
Feels like only yesterday we were talking about the sentience of the furbys.
Caleb Chung:
Yeah. Yeah.
Jad Abumrad:
Yes. (laughs)
Caleb Chung:
Isn't that weird?
Jad Abumrad:
Yeah.
Caleb Chung:
That's so bizarre. And what is it, like, five years ago or...
Jad Abumrad:
So, we brought Caleb back into the studio, because in the years since we spoke with him, he's worked on a lot of these animatronic toys [crosstalk 00:29:48] including a famous one called the Pleo, and in the process, he's been thinking a lot about how these toys can push our buttons as humans. And, how, as a toy maker, that means, he's gotta be really thoughtful about how he uses that.
Caleb Chung:
You know, we're doing a baby doll right now, we've done one... and w- and the baby doll, an animatronic baby doll is, is, probably the hardest thing to do because, you know, you do one thing wrong it's Chucky.
Jad Abumrad:
(laughs)
Caleb Chung:
If they blink too slow, if their eyes are too wide, and also, you're giving it to the most vulnerable of our species, which is our young who are, you know, practicing being nurturing moms for their kids. So, let's say the baby just falls asleep right? Uh, we're trying to write in this kind of code, and, uh, and, um, you know. It's got like tilt sensors and stuff, so, you've just, you know, give the baby a bottle, and you put it down to take a nap.
Caleb Chung:
You put him down, you're quiet, and so, what I want to do, as the baby falls asleep, it goes into a deeper sleep. But, if you bump it right after it lays down, then it wakes back up. Uh, we're trying to write in this kind of code, because that seems like a nice way to reinforce best practices for a mommy. Right? So, I know my responsibility, uh, in this.
Jad Abumrad:
In large part he says, because he hasn't always gotten it right.
Caleb Chung:
Here, here's an e- here's a great example.
Speaker 23:
His name is Pleo.
Speaker 25:
Pleo, that's him.
Caleb Chung:
I don't know if you've ever seen the Pleo dino we did.
Speaker 25:
He's a robot with artificial intelligence.
Jad Abumrad:
Pleo was a robotic dinosaur, pretty small, about a foot from nose to tail, looked a lot like the dinosaur Littlefoot from the movie The Land Before Time. Very cute.
Caleb Chung:
It was very lifelike, and we went hog wild in, in putting real emotions in it and reactions. The fear and everything right?
Jad Abumrad:
And, it is quite a step forward in terms of how lifelike it is. It makes the Furby look like child's play.
Furby:
[Foreign language 00:31:41]
Jad Abumrad:
It's got, uh, two microphones built in, uh, cameras to track and recognize your face. It can feel the beat of a song, and then, with dozens of motors in it, it can then dance along to that song. In total, there are 40 sensors in this toy.
Speaker 23:
Bump into things...
Jad Abumrad:
And it, so it c- it follows you around.
Speaker 23:
He needs lots of love and affection.
Jad Abumrad:
Wanting you to pet it.
Speaker 25:
Whoa, tired huh? Okay.
Jad Abumrad:
As you're petting it, it will fall asleep.
Speaker 25:
Go to sleep.
Jad Abumrad:
It is undeniably adorable. And, Caleb says his intent from the beginning was very simple: to create a toy that would encourage you to show love and caring.
Caleb Chung:
You know, our belief is that, that humans need to feel empathy towards things in order to be more human, and I think we can, uh, help that out by having little creatures that you can love. Now these...[crosstalk 00:32:33]
Jad Abumrad:
That was, uh, Caleb demonstrating the Pleo at a TED Talk. Now what's interesting is that, in keeping with this idea of wanting to encourage empathy, he programmed some behaviors into the Pleo that he hoped would nudge people in the right direction.
Caleb Chung:
For example, Pleo will let you know if you do something that it doesn't like. So if you actually moved his leg when his motor wasn't moving it'd go, pop, pop, pop. And, he would interpret that as pain or abuse. And, he would limp around, and he would cry, and then he'd tremble, and the, he would take a while before he warmed up to you again. And, so, what happened is, we launched this thing, and there was a website called Device.
Jad Abumrad:
This is sort of a tech product review website.
Caleb Chung:
They got ahold of a Pleo, and they put up a video.
Jad Abumrad:
What you see in the video is Pleo on a table being beaten.
Speaker 26:
Huh. Bad Pleo.
Speaker 27:
He's not doing anything.
Jad Abumrad:
You don't see who's doing it exactly, you just see hands coming in from out of the frame and knocking him over again and again.
Speaker 26:
You didn't like it?
Jad Abumrad:
You see the toy's legs in the air struggling to right itself. Sort of like a turtle that's trying to get off its back. And it started crying,
Caleb Chung:
'cause, that's what it does.
Jad Abumrad:
These guys start holding it upside down by its tail.
Caleb Chung:
Yeah. They held it by its tail.
Speaker 26:
(laughs)
Jad Abumrad:
They smash its head into the table a few times, and you can see in the video that it responds like it's been stunned.
Speaker 26:
Can you get up?
Speaker 27:
Okay, this is good, this is a good test.
Jad Abumrad:
It's stumbling around.
Speaker 27:
No, no.
Jad Abumrad:
At one point they even start strangling it.
Caleb Chung:
It actually starts to choke.
Speaker 26:
(laughs). It doesn't like it.
Jad Abumrad:
Finally, they pick it up by its tail one more time.
Caleb Chung:
Held it by its tail, and hit it. And it was crying and then it started screaming, and then they... They beat it, until it died right?
Jad Abumrad:
Whoa.
Caleb Chung:
Until it just did not work anymore.
Jad Abumrad:
This video, uh, was viewed about 100,000 times. Many more times than the reviews of the Pleo, and Caleb says there's something about this that he just can't shake.
Caleb Chung:
Because whether it's alive or not, that's, that's exhibiting sociopathic behavior.
Jad Abumrad:
What brought out that sociopathic behavior, whether there was some design in the toy, whether offering people the chance to see a toy in pain in this way somehow brought out curiosity, like a kind of cruel curiosity, he's just not sure. What happens when you turn your an- animatronic baby upside down? Will it cry?
Caleb Chung:
I'm not sure yet. I mean, we're working on next versions right now, right? I, I, I'm not... What would you do? I mean, it, it's a good question. You have to have some kind of a response, otherwise it seems broken, right? But, you know, if you make 'em react at all, you're gonna get that repetitive abuse because it's cool to watch it scream.
Jad Abumrad:
It sounds like you have maybe an inner conflict about this?
Caleb Chung:
Sure.
Jad Abumrad:
That, that you might even be pulling back from making it extra lifelike?
Caleb Chung:
Yeah, I'm, I'm, for my little company, I've, I've adopted kind of a Hippocratic, Hippocratic oath, like, you know, don't teach something that's wrong. Or, don't reinforce something that's wrong. And, so, I've been working on this problem for years. I'm, I'm struggling with what's this, what's the right thing to do? You know?
Jad Abumrad:
Yeah.
Caleb Chung:
Since you have the power, since you have the ability to turn on and off chemicals at some level, in, in another human, right? It's, what... Which ones do you choose? And so, this gets to the bigger question of AI, right? This is the question in AI, and I'm gonna jump to this 'cause it's really the same question as, you know, how do we create things that can help us? You know, I'm dealing with that on a, on a microscopic scale, but, this is the question. And, so, the first thing that I would try to teach our new AI, if I had the ability, is try to understand the concept of empathy. We need to introduce the idea of empathy. Both in an AI and us for these things. That's where we're at.
Jad Abumrad:
Caleb says in the specific case of the, of the animatronic baby he's designing, at least when we talked to him, his thinking was that he might have it... If you hold it upside down, cry once or twice, but then stop. So that you don't get that repeat thrill.
Robert:
[crosstalk 00:37:39] Anyway, um, I, I, I was wondering whether it, whether-[crosstalk 00:37:42]
Jad Abumrad:
Back in the Greene Space with Brian Christian, and back on the subject of chatbots, we found ourselves asking the very question that Caleb has.
Robert:
Whether... Is it possible that in, in, which this is getting kinda grim, that maybe, uh, that in some, in some ways chatbots are good for humans?
Jad Abumrad:
Yeah, I mean, is there any situation where you can throw in a couple of bots and things get better? Like, can chatbots actually be helpful for us, and if so, how?
Brian Christian:
Yeah, there have been some academic studies on trying to use chatbots for these humane benevolent ends, uh, that I think paint this interesting other narrative. And, so, for example, um, researchers have tried injecting chatbots into Twitter conversations that use hate speech. Um, and this bot will just show up and be like, hey, that's not cool.
Crowd:
(laughs)
Robert:
(laughs)
Brian Christian:
Um, um, and...
Jad Abumrad:
It says it just like that, "That's not cool man."
Brian Christian:
You know. It'll say something like, There's, there's real people behind the keyboard, and you're really hurting someone's feelings when you talk that way. Um, and you know, it's sort of preliminary work, but there are some studies that appear to suggest, you know, this sharp decline in that user's use of hate speech as [crosstalk 00:38:56].
Robert:
You mean, just because of one little, Oh, I don't think you should say that in print, like that's, that's enough? Or, do you h- you say, if you say, I have fifty trillion followers or something like that?
Brian Christian:
Well yeah, it, it actually does depend, so this is interesting, it does depend on the follower count...
Robert:
(laughs)
Brian Christian:
... of the bot that makes the intervention. So, if you perceive this bot to be, well, it also requires that you think they're a person, so this is, this is sort of flirting with the, with d- dark magic a little bit. Um, but, if you perceive them to be, uh, higher status on the platform than yourself, then you will tend to sheepishly fall in line. But, if the bot has fewer followers than the user it's trying to correct, that will just instigate the, the bully to bully them now, in addition.
Robert:
Mm-hmm (affirmative). Wow.
Brian Christian:
So, yeah. Human nature...
Jad Abumrad:
It cuts bot ways huh?
Brian Christian:
Yeah it's...
Robert:
Well, but we've run into like, you want to tell him?
Jad Abumrad:
Yeah.
Robert:
We've run into this very cool thing. I mean we're gonna finish, but this is like, this is like the, this is the-
Jad Abumrad:
All right, so, uh, we want to tell you one more story, because as we were thinking about all this, uh, and trying to find a more optimistic place to land, we bumped into a story, uh, from this guy. Who are you?
Joshua Rothman:
(laughs)
Jad Abumrad:
Just right there, maybe let's go one step back.
Robert:
'Cause you just wandered in and we weren't quite expecting you.
Joshua Rothman:
So, I'm, I'm, uh, Josh Rothman. I'm a writer for The New Yorker.
Jad Abumrad:
We brought him into the studio a couple weeks back.
Robert:
Well, let's g- so why don't we begin by, this, um, story of yours largely takes place in a laboratory in Barcelona.
Joshua Rothman:
Yeah, it's a lab. It's in Barcelona.
Jad Abumrad:
And, it's run by a couple, Mel Slater and Mavi Sanchez-Vives.
Mavi S:
Mavi Sanchez-Vives, I'm a neuroscientist.
Joshua Rothman:
And they have these two VR labs together.
Jad Abumrad:
VR as in virtual reality. And Josh, uh, a little while back took a trip to Barcelona to experience some of the simulations that Mavi and Mel put people in. He went, uh, to c- to their campus. Showed up at their lab.
Joshua Rothman:
You feel sort of like you're going to a black box theater.
Robert:
Oh.
Joshua Rothman:
It's sort of like a lot of big rooms, um, all covered in black with curtains. There's a lot of dark spaces, and-
Jad Abumrad:
The researchers then explained that what's gonna happen is he's gonna put on a headset. This sort of big helmet.
Mavi S:
They go, they put on the head mounted display, and eventually it turns on.
Jad Abumrad:
The visuals start to fade in.
Mavi S:
And this room appears.
Joshua Rothman:
You're standing in a sort of, um, generic room.
Jad Abumrad:
The graphics look straight out of like, a Windows 95 computer game.
Joshua Rothman:
It's like the loading screen of the VR, and then that dissolves, and it's replaced with the simulation. And, when the simulation started, I was standing in front of a mirror.
Jad Abumrad:
A digital mirror in this digital world reflecting back at him his digital self, his avatar.
Mavi S:
So basically you move, and your virtual body moves with you.
Joshua Rothman:
And I could see, uh, in the mirror, a reflection of myself, but the person, who's, who, who, the self that I saw reflected, uh, she had a body. She was a woman.
Robert:
She?
Joshua Rothman:
Yeah. So, I think, when people think of virtual reality, they often imagine w- wanting to have like, realistic experiences in VR, but that's not what Mel and Mavi do. They are interested in VR precisely because it lets you experience things that you could never experience in your real body in the real world.
Mavi S:
You can have a body that can be completely transformed, and can move, and can change color, and can change shape. So, it can give you a, a very, very unique tool to explore.
Jad Abumrad:
You know, in their work, they'll often, in, in these VR worlds, turn men into women as they did for Josh, for his first time out. They will, um, often take a tall person and then make them a short person in the VR, so that they can experience the world as a short person might. Where they have to kind of crane their neck up a bunch. They'll change the color of your skin in VR, and run you through scenarios where you are having to experience the world as another race. And, uh, what's remarkable is in all of these manipulations, um, apparently you adjust to the new body very quickly. And, they've done physiological tests to measure this, they, it takes almost no time at all to feel as if this alien body is actually yours.
Joshua Rothman:
They call this the illusion of presence.
Mavi S:
You know we, we think of our body as a very stable entity. However, by running experiments in virtual reality, you see that actually in, in one minute of a simulation, our brain accepts a different body, even if this body is quite different from your own.
Jad Abumrad:
And this flexibility that our brains seem to have can lead to some very surreal situations. This is really the story that brought us to Josh. He told us about another VR adventure, where again, he put on the headset, this world faded up.
Joshua Rothman:
And, I was sitting in a chair in front of a desk in a really cool looking modernist house.
Mavi S:
Wooden floors, and then there is some, uh, glass walls.
Joshua Rothman:
And, through the glass walls I could see fields with wildflowers.
Mavi S:
Green grass outside.
Jad Abumrad:
Again he noticed a mirror, and this time, the reflection in the mirror was of him. It was a realistic looking avatar of him, and after checking out his digital self for a while, he turned his head back to the room and realized that across the room, there was another desk.
Joshua Rothman:
Um, and, behind this other desk, w- Freud was sitting there.
Robert:
Who?
Joshua Rothman:
Freud.
Robert:
Sigmund Freud?
Joshua Rothman:
Sigmund Freud, the psychoanalyst.
Robert:
So, uh, a, a middle-aged man with a big brown beard?
Joshua Rothman:
He had a beard. He had glasses, and, um, he was just sitting there with his hands folded in his lap.
Jad Abumrad:
So, Josh is sort of taking this all in. He's looking at Freud. Freud's looking back at him, and then...
Mavi S:
Okay, okay now, now, a-[crosstalk 00:44:37].
Jad Abumrad:
He hears the voice of a researcher in his ear coming through his VR helmet.
Mavi S:
Tell Freud about your, your problems, any problem.
Joshua Rothman:
She explained what you're gonna do is, you're going to explain a problem that you're having, a personal problem that you're having to Freud. Um, something that's bothering you in your life. Then she said, take a minute. Think about what you'd like to discuss.
Jad Abumrad:
Did something immediately, uh, jump to mind?
Joshua Rothman:
Yeah, so, you know, my uh, my mom had a stroke a few years ago, and she's in a nursing home, and I'm her guardian. So, she's young. She's 65, um, but, because of this stroke she, like needs 24 hour care and she can't talk... She doesn't have any words anymore. So, it's a very tough thing for me. We, we, I, I thought really hard about where she should live. I, I live here in New York, um, my mom lives in Virginia.
Jad Abumrad:
Josh says he really debated for a long time. Should he put her in a nursing home in New York, where he can be closer to her, or, should he put her in a nursing home in Virginia, where he would be far away?
Joshua Rothman:
She has all these friends and family members down there, so in the end I decided to, you know, find a place for her there, where there's lots of people who can visit her. So, I go down maybe once every month or six weeks to see my mom, but then, every weekend, you know, someone from this group of friends or family relatives visits her down there. Whereas if she were up here, you know, I'd be the only person. Um, so that's the decision I made. But, um...
Robert:
But you don't feel really good about it.
Joshua Rothman:
Yeah, you know, I feel, uh, guilty about it.
Jad Abumrad:
Like he was a terrible son. And he says, he would especially have that feeling each week after her friends would visit her in the nursing home, and then, send him an email update saying, hey, this is how your mom is doing. Every time he would read one of those emails, even if she was doing well, his stomach would just drop.
Joshua Rothman:
This, this problem, this emotion, feeling guilty is one I've felt for a while. So, I said to Freud. (laughs) I said, uh, my mom is in a nursing home in another state, and, friends and family visit her, and they send me reports on how she's doing, and I, I always feel really bad when I get these reports.
Robert:
And this is said in your voice. If you'd gla- gazed at the mirror while you were talking would you be saying it?
Joshua Rothman:
Yeah.
Jad Abumrad:
So, after he said this to Freud...
Joshua Rothman:
The world sort of, uh, faded out to black, and then it faded back in.
Jad Abumrad:
And suddenly the world had shifted. He was now across the room, behind the desk that had just been opposite of him, and he was inside the body of Freud. He looked down at himself, he was wearing a white, white shirt, gray suit. There was a mirror next to that desk. And, he looks at himself.
Joshua Rothman:
Um, I have a little beard. You know everything.
Jad Abumrad:
He looked just like Freud.
Joshua Rothman:
Um, but the main thing that was really surprising was that across the room I could see myself. So, this is the avatar of me now. Um, and I watched myself, uh, say what I had just said. So...
Jad Abumrad:
Oh wow, so it p- it plays it back?
Mavi S:
Exactly. The recording is now replayed. The movements, and also the voice. And they see themselves as they talked about their problem.
Joshua Rothman:
So, first, I can see my... I'm sitting in the chair, and I'm sort of uncomfortable, I'm moving around. I take my hands and um, put them in my lap and fold them together, and then I take them apart, and then I put them together. You know, I can watch myself be nervous, and then I saw, uh, then I saw myself say what I just said.
Joshua Rothman:
My mom is in a nursing home in another state, and, friends and family visit her, and they send me reports[crosstalk 00:48:12]-
Joshua Rothman:
Um, you know, in my voice.
Joshua Rothman:
And I always feel really bad.
Joshua Rothman:
Um, m- you know, moving the way I move, and it was just like me watching myself. Um, and I guess the best way I can describe that was, it was moving.
Robert:
What?
Joshua Rothman:
Moving. Moving like-
Robert:
Moving as in me- emotionally.
Joshua Rothman:
Yeah, emotionally moving. I mean, I, I felt um, uh, I, I don't know if this is gonna make any sense, but, you know how there's a point in your life where you realize that your parents are just people?
Robert:
Yes.
Jad Abumrad:
Yeah.
Joshua Rothman:
It was kinda like that. Except it was me.
Jad Abumrad:
Oh, interesting.
Robert:
(laughs)
Jad Abumrad:
Did you feel, uh, closer to that guy, or, or...
Joshua Rothman:
I felt bad for him.
Jad Abumrad:
You felt bad for him.
Robert:
For him, sorry.
Joshua Rothman:
Yeah, I, I f- my, my, my, uh, my feelings went out to this other person, who was me.
Jad Abumrad:
As he's having this empathetic reaction as Freud looking back at himself, the researcher's voice again appears in his ear.
Mavi S:
Give advice from the perspective of Sigmund Freud. Advice of how this, uh, problem could be solved. How you could deal with it.
Jad Abumrad:
Essentially respond to your patient.
Joshua Rothman:
So, I didn't know what to say. So, I said, um, why do you think you feel bad? That was the, that was...
Robert:
That was, that was a good Freudian kinda thing.
Jad Abumrad:
Yeah.
Joshua Rothman:
(laughs) Why do you think you feel bad?
Jad Abumrad:
Soon as he asked that, shoop, he's back in his body, his virtual body, staring back at virtual Freud, and he sees a playback of Freud asking him that question.
Joshua Rothman:
I watched Freud say this to me.
Joshua Rothman:
Why do you think you feel bad.
Joshua Rothman:
Except that when Freud talks, they had something in the program that made his voice extra deep.
Mavi S:
It has, uh, some voice distortion, so deeper voice.
Joshua Rothman:
And so, his voice didn't sound like my voice.
Jad Abumrad:
H- how did you respond as y- as now you?
Joshua Rothman:
I said I feel bad because it doesn't seem right that I'm living far away.
Jad Abumrad:
Once again, shoop, he switches bodies. Now he's in Freud again staring back at himself.
Joshua Rothman:
And, I watched myself say this.
Joshua Rothman:
I feel bad because...
Joshua Rothman:
And then, as Freud I said, well, why, why are you far away then?
Jad Abumrad:
Shoop, back into his own body. Freud says to him from across the room...
Joshua Rothman:
Why are you far away then?
Joshua Rothman:
And I said, well, because, um, if my mom lived in New York, I'd be the only person here, but, if she's down in where she lives then, there's other people to visit her.
Jad Abumrad:
Shoop.
Joshua Rothman:
Back in Freud's body, and I said, so it sounds like there's, there's a reason why, um, why you live where you live? Um, so, if you know that, well, w- why do you still feel bad?
Jad Abumrad:
Shoop. Switches back to himself.
Joshua Rothman:
If you know that, why, why do you, why do you still feel bad? Um, I said something like, um, you're right. (laughs)
Jad Abumrad:
(laughs) Wow.
Joshua Rothman:
And went back into Freud, and then as Freud I said, you know, it sounds like the, uh, the thing that's making you unhappy, which is making you feel bad, which is getting these reports from these people is actually the whole reason why you decided to live in these, you know, to have, keep your mom where she is. Like there's a loop. Right? It's like these, these reports I get from my mom's friends make me feel bad, but, the whole reason why I decided to leave her in this place in Virginia is specifically so that there are friends who can visit her.
Jad Abumrad:
There's this classic idea in psychology called the reframe, which is where you try and take a problem and reframe that problem into its solution. And, he says in that moment, he kind of did that. He had this very simple epiphany that his guilt was actually connected to something good.
Joshua Rothman:
I never had that thought before.
Jad Abumrad:
He chose to keep his mom in Virginia so that her friends would visit her more, and each time her friends visited, he felt bad, but, that meant they were visiting. So, the bad feeling, and the fact that he was feeling it so much was itself kind of evidence for the fact that he had made, if not the right decision, at least a decision that made sense.
Joshua Rothman:
The experience I had talking to myself as Freud was um, was nothing like the experience I had in my own head, turning this issue over and over.
Mavi S:
By switching back and forth, uh, by swapping bodies, somehow you can give advice, um, from a different perspective.
Joshua Rothman:
When I was back in my own body and Freud said it to me I was just like, I just felt like, um, wow, good point. (laughs)
Jad Abumrad:
That's so amazing.
Joshua Rothman:
That was my thought.
Robert:
But, but wouldn't your next thought be what the hell is going on here? Why am I able in this utterly fictive situation to split myself in two and heal myself?
Joshua Rothman:
Well, I, I took the headset off, and I sat there for a little while, while the researchers looked at me, um, trying to make sense of it, and I, I think what, what I keep coming back to is the seeing yourself just as a person. Not as you. Not with all the, uh, complexities and, um, stuff that is in y- your self experience of being yourself.
Jad Abumrad:
And, this might be the real key thing, like, when you are in your body, which you pretty much always are, you have all of these thoughts and feelings, which are attached to that body. It's sort of like when you go home for Thanksgiving and you walk into your parents' kitchen and suddenly you just kinda feel like you're a teenager again. Like all those same thought patterns from your youth kinda kick back into gear, because the context of that kitchen is powerful, and your body is that writ large. But, if you can jump out of it and go into a new one, suddenly all those constraints and all that context are gone.
Joshua Rothman:
When I'm embodied as Freud, not only do I look different and think this is my body, but I feel different, and I have different types of thoughts, and I see, um, people differently.
Jad Abumrad:
And, Josh says, what he saw when he was Freud looking back at himself, was just a guy who needed help.
Joshua Rothman:
When someone comes to you and asks for help, your feelings are not complicated. They're just tenderness, kindness, I, I, my, your instinct is to help them.
Jad Abumrad:
And, he says he was able to bring that very simple way of being back to himself. Did it, did it make a difference? Did you walk out of that with, with s- a different feeling about yourself?
Joshua Rothman:
I did. I, I think, um, I've had a feeling of... I think it revised my feeling about m- who I was a little. I think it made me feel a little more, um... I, I don't even have a word for it. Just a little more human.
Jad Abumrad:
Josh Rothman's a writer for The New Yorker; his story first appeared there, and, uh, we told it to that live audience at the Greenspace.
Robert:
Hm, so Brian, this is, you, you get the last word. I, uh, um...
Brian Christian:
To me this is really interesting because the history of chatbots begins with a chatbot program written in the '60s by an MIT professor, uh, named Joseph Weizenbaum. The program was called ELIZA, and it was designed to mimic a non-directive Rogerian therapist: you would say, I'm feeling sad, and it would just throw it back to you as a kind of Mad-lib, I'm sorry to hear you're sad, why are you sad?
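For the curious, here is a minimal sketch, in Python, of the kind of Mad-lib reflection Brian is describing. It is an illustration only, not Weizenbaum's original ELIZA; the rules, the phrasings, and the respond function here are invented for the example.

import re

# A few illustrative ELIZA-style rules: a regular expression to match in the
# user's statement, and a template that reflects the captured fragment back
# as a question. These particular rules are made up for this sketch.
RULES = [
    (re.compile(r"i'?m feeling (.+)", re.IGNORECASE),
     "I'm sorry to hear you're feeling {0}. Why are you feeling {0}?"),
    (re.compile(r"i feel (.+)", re.IGNORECASE),
     "Why do you think you feel {0}?"),
    (re.compile(r"my (.+)", re.IGNORECASE),
     "Tell me more about your {0}."),
]

def respond(statement):
    # Reflect the user's statement back as a question, Rogerian-style.
    statement = statement.strip().rstrip(".!?")
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(match.group(1))
    # Non-directive fallback when nothing matches.
    return "Please, go on."

print(respond("I'm feeling sad"))  # I'm sorry to hear you're feeling sad. Why are you feeling sad?
print(respond("I feel bad"))       # Why do you think you feel bad?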
Brian Christian:
Um, and, Weizenbaum was famously horrified when he walked in on his secretary just like, spilling her life's innermost thoughts and feelings to this program that she had seen him write. You know, so there's no, there's no mystery there. But, he came away from that experience feeling appalled at the degree to which people will sort of project, um, human intention onto just technology, and his reaction was to pull the plug on his own research project, and for the rest of his life he became one of the leading critics of chatbot technology and of AI in general. Um, and I think it's really powerful to juxtapose that against the story you've just shared, which tells us that there's, there's more to the picture than that. That there are ways to use this technology that don't sort of distance us, but that sort of enable us to be more fully human.
Brian Christian:
Um, and I, I think that's a wonderful way to think about it.
Robert:
Well, why don't we just leave it there, uh, pleasantly. We have some thanks to give. B- but we have particular, particular thanks to give to the person who made this whole cybersphere around us possible: that's Lauren Kunze. Lauren, oh, like that's Lauren.
Jad Abumrad:
Thank you to, uh, Pandorabots, which is the platform that powers, uh, conversational AI software for hundreds of thousands of global brands and developers. Learn more about their enterprise offerings and services at pandorabots.com. Thanks also to Chance Bone for designing the Robert or Robot artwork for tonight. And, of course, to Brian Christian for coming here to talk with us today.
Robert:
Yes, thank you. And to you.
Jad Abumrad:
Okay.
Robert:
Okay, thank you all.
Jad Abumrad:
Thank you guys so much. This episode was recorded and produced by Simon Adler, and our live event was produced with machine-like efficiency by Simon Adler and Suzie Lechtenberg.
Jad Abumrad:
(singing)
Jad Abumrad:
By the way thanks to Dylan Keefe, Alex Overington, and Dylan Greene for original music.
Jad Abumrad:
(singing)
Answer Machine:
Start of message.
Jad Abumrad:
Hi this is Brian Christian. Radiolab was created by Jad Abumrad, and is produced by Soren Wheeler. Dylan Keefe is our director of sound design. Maria Matasar-Padilla is our managing director. Our staff includes Simon Adler, Maggie Bartolomeo, Becca Bressler, Rachel Cusick, David Gebel, Bethel Habte, Tracie Hunte, Matt Kielty, Robert Krulwich, Annie McEwen, Latif Nasser, Malissa O'Donnell, Arianne Wack, Pat Walters, and Molly Webster. With help from Amanda Aronczyk, Shima Oliaee [Eilee 01:00:23], and [Reeve kanon 01:00:24]. Our fact checker is Michelle Harris.
Answer Machine:
End of message.
Answer Machine:
(silence)
Copyright © 2019 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.