
May 17, 2018
Transcript
[RADIOLAB INTRO]
JAD ABUMRAD: I'm Jad.
ROBERT KRULWICH: I'm Robert.
EMCEE: Um, are you guys ready to do this? Maybe we should just do this?
JAD: This is Radiolab.
EMCEE: All right, but when your hosts come out, I need you to seriously clap like you've never seen two dudes with glasses talking on a microphone okay?
[laughter]
EMCEE: Okay? So like, just really, really give it up for your mostly human hosts, Jad Abumrad and Robert Krulwich.
ROBERT: So about a week ago we gathered, I guess, roughly a hundred people...
JAD: Hello!
ROBERT: ... into a performance space, which is in our building here at WNYC. It's called the Greene Space.
JAD: Yeah.
ROBERT: This is like a, like a playground for us so we can just try things.
JAD: We decided to gather all these people into a room on a random Monday night. What else are you doing on a Monday, right? Because seven years previous we had made a show called Talking to Machines, which was all about, like, what happens when you talk to a computer that's pretending to be human.
ROBERT: Right.
JAD: And, the thing is, so much has happened since we made that show, with the proliferation of bots on Twitter. Russian bots meddling in elections. The advances in AI. So much interesting stuff had happened that we thought, it is time to update that show.
ROBERT: And we needed to do it live we thought, because we had a little plan in mind. We wanted to put unsuspecting people into a room for a kind of showdown between people and machines. But we, we want to set the scene a little bit and give you, uh, just a flavor of what we're really gonna be…
ROBERT: Just to start things off, we brought to our stage one of the guys who inspired that original show.
JAD: Please welcome to the stage writer Brian Christian.
ROBERT: Just so we can just—just get things sort of oriented, we need to first of all just redefine what a chatbot is.
BRIAN CHRISTIAN: Right. So, a chatbot is a computer program, uh, that exists to mimic and impersonate human beings.
ROBERT: Like, when do I run into them?
BRIAN CHRISTIAN: You go to a website to interact with some customer service. You might find yourself talking to a chatbot. Um, the US Army has a chatbot called Sgt. Star that recruits people.
JAD: Now, can I ask you a question about the thing you just said about chatting with customer service?
BRIAN CHRISTIAN: Yeah.
JAD: Which I end up doing a lot.
[laughter]
JAD: Um, I'm sorry. Which is I, you know, like, it's the middle of the night, you're trying to figure out some program and it's not working, and then suddenly there's like, "Need to chat?" And you click on that.
ROBERT: What do you mean suddenly there's need to chat?
JAD: Well, it's like you, you're, whatever.
[laughter]
ROBERT: Okay.
JAD: I assume many of you have had this experience, uh ...
ROBERT: I've had very few of the experiences that he's had, so there's just the, that issue always.
JAD: I'm always curious, it, it, what... it seems very human when you're having that, that conversation with a, with a customer service chatbot. Is there a, is there a place where it... Where is the line between human and robot, seeing that they're both present?
BRIAN CHRISTIAN: Yeah. Well, this, this is the question, right? So, we're now sort of accustomed to having this uncanny feeling of not really knowing the difference. My guess, for what it's worth, is that there's a system on the back end that's designed to sort of do triage, where the first few exchanges that are just like, hey, how can I help? What's going on, it seems like there's an issue with the such-and-such. Um, that is basically just a chatbot, and at a certain point, you kind of seamlessly transition and are handed off to a real person.
JAD: Mm-hmm.
BRIAN CHRISTIAN: But without any, you know, notification to you that this has happened. It's deliberately left opaque at what point that happens.
JAD: Wow.
ROBERT: And this is literally everywhere.
BRIAN CHRISTIAN: It is, and I mean, and you can't get on social media and read some comment thread without someone accusing someone else of being a bot.
ROBERT: [laughs]
BRIAN CHRISTIAN: And, you know, it seems, uh, it seems maybe sort of trivial at some level, but we are now living through this political crisis of how do we kind of come to terms with the idea that we can, you know, weaponize this kind of speech, and how do we as consumers of the news or as users of social media try to suss out whether the people we're interacting with are in fact who they say they are.
ROBERT: And all this confusion about what's the machine and who's the human, it can get very interesting in the context of a famous thought experiment named for the great mathematician Alan Turing. Brian told us about this. It's called the Turing Test.
BRIAN CHRISTIAN: Alan Turing, he makes this famous prediction back in 1950 that we'll eventually get to a point sometime around the beginning of this century where we'll stop being able to tell the difference.
JAD: Well, what specifically was his, sort of, prophecy?
BRIAN CHRISTIAN: His specific prophecy was that, by the year 2000, uh, after five minutes of interacting by text message with a human on one hand and a chatbot on the other hand, uh, 30% of judges would fail to tell which was the human and which was the robot.
ROBERT: Is 30 just like a soft kind of-
BRIAN CHRISTIAN: 30 is just what Turing imagined, and, he predicted that as a result of hitting this 30% threshold, we would reach a point, he writes, where, one would speak of machines as being intelligent without expecting to be contradicted. Um, and this just existed as kind of a, part of the philosophy of computer science until the early 1990s when, into the story steps Hugh Loebner, a rogue multi-millionaire disco dance floor salesman.
ROBERT: (laughs) A what?
BRIAN CHRISTIAN: (laughs) A rogue millionaire, plastic portable light-up disco dance floor salesman, he like ...
ROBERT: You mean like the Bee Gees kind of s-
BRIAN CHRISTIAN: Yeah.
JAD: Wow.
ROBERT: The, the lighting, the floor that lights up?
BRIAN CHRISTIAN: Yeah. But portable. (laughs)
[laughter]
ROBERT: But portable. You can make a... You can be a rogue millionaire from that?
BRIAN CHRISTIAN: There's apparently millions to be made if, if, if only, if only you knew. Um, and, um, Hugh Loebner, this eccentric millionaire, uh, decides that we- this was in a- about 1992, that the technology was starting to get to the point where it would be worth not just talking about the Turing Test as this thought experiment, but actually convening a group of people in a room once a year to actually run the test.
JAD: Now a bit of background. During the Loebner competitions, the actual Loebner competitions, how it usually works is that you've got some participants. These are the people who have to decide what's going on. They sit at computers, and they stare at the computer, and they chat with someone on a screen. Now, they don't know if someone they're chatting with is a person or a bot. Behind a curtain, you have that bot, a computer running the bot, and you also have some humans, who the participants may or may not be chatting with. They've gotta decide right? Are they chatting with a person or a machine?
JAD: Now Brian, many, many years ago actually participated in this competition. He was one of the humans behind the curtain that was chatting with the participants. And, when we talked to him initially many years ago, uh, for the Talking to Machines show, we went into all the strategies that the computer programs w- were using that year to try and fool the participants, but the takeaway was that, the year that he did it, the computers flopped. By and large, the participants were not fooled. They knew exactly when they were talking to a human, and when they were talking to a machine. And that was a while ago. In the Greene Space, we asked Brian, where do things stand now?
ROBERT: Has it, like, when we last talked to you, what, when did we last, when was it? 2011?
BRIAN CHRISTIAN: 2011.
ROBERT: 2011.
JAD: Has it, have we passed the 30% threshold si- in the intervening seven years?
BRIAN CHRISTIAN: So, in 2014, there was a Turing Test competition that was held at which the top computer program managed to fool 30% of the judges.
JAD: Wow.
BRIAN CHRISTIAN: And, so...
ROBERT: That's it right?
BRIAN CHRISTIAN: Depending on how you want to interpret that result. The controversy arose in this particular year because the chatbot that won was claiming to be a 13-year-old Ukrainian, who was just beginning to get a grasp on the English language.
JAD: Oh.
ROBERT: Oh, so the machine was cheating.
BRIAN CHRISTIAN: Right.
JAD: Well, that's interesting, so it masked its computerness in broken grammar.
BRIAN CHRISTIAN: Yeah, exactly, right, or it, if it didn't appear to understand your question you started to have this story you could play in your own mind of, oh, well maybe I didn't phrase that quite right or something.
ROBERT: Has it been broke- has the, has it, there been a second winner, or a third winner, or a fourth winner or...
BRIAN CHRISTIAN: Um, to the best of my knowledge, we are still sort of flirting under that threshold.
ROBERT: Well, since we haven't had any victories since 2014, w- we thought we might just do this right here. Just right here in this room, do our own little Turing Test.
JAD: Okay, unbeknownst to our audience, we had actually lined up a chatbot from a company called Pandorabots that had almost passed the Turing Test. It had fooled roughly, not quite, but almost 25% of the participants. We got the latest version of this bot, and...
ROBERT: We just need one person. Anyone in the room. Um, your, your job will be ...
JAD: We decided to run some tests with the audience, starting with just one person.
ROBERT: I can see one hand over there. I'm so- I, I don't wanna get the first hand, I guess the ...
[laughter]
JAD: What the... how about this person over here on the left?
ROBERT: Okay.
JAD: So we brought up this, uh, young woman on stage, put her at a computer, and we told her she would be chatting with two different entities. One would be this bot, Pandorabot, and the other would be me. But, I was, I went off stage and sat at a computer in the dark where no one could see me, and she was gonna chat with both of us, and not know who was who. Who was machine, and who was human.
ROBERT: You won't know which.
WOMAN: Do I get as many questions as I ...
ROBERT: Well, I don't know. I know we're gonna give you a time limit. You can't be here all evening.
ROBERT: So after Jad left the stage and went back into that room, up on the screen came two different chat dialogue boxes.
ROBERT: You'll see that we have two options. We've just labeled them by color. One is Strawberry, the other is Blueberry, or code red and code blue. You think you can talk to both of them at the same time, just jump from one to the other?
WOMAN: Sure. Yeah.
ROBERT: Have you got any sort of thoughts of how you could suss out whether the thing was a person or a thing?
WOMAN: Yeah, I have some thoughts. I mean, like, my first tactics are gonna be, like, sort of, like, h- very human emotional questions, and then we'll, like, go from there. See what, see what ...
ROBERT: I really don't know what that means.
[laughter]
ROBERT: But, I'm not gonna ask, 'cause I don't wanna, I don't wanna lose your inspiration.
WOMAN: Gonna try to therapize this robot.
ROBERT: All right. So, when I say go, you'll just go, and I'll just narrate what you're doing okay?
WOMAN: Okay.
ROBERT: Okay. Three, two, one, begin.
ROBERT: So she started to type, and first thing she did was she said, "Hello." to Strawberry.
ROBERT: Okay, so you've gotten your first... Well we've got a somewhat sexual response here. The machine has said, I like strawberries, and then you've returned with strawberries are delicious, and oh, now it's getting warmer over there. Blue is a warmer, is a cooler color. Maybe you'd like to go and discuss Aristotle with the blueberry.
[laughter]
ROBERT: Then she switched over and started to text the, the blue one, which is Blueberry.
WOMAN: I have ..
ROBERT: Oh, there he is, hi blue- hi Bluesy Bee. Okay. That's also, uh, a kind of a generous sort of opener.
WOMAN: Yeah. See if this...
ROBERT: Hi Bluesy bee.
WOMAN: Guy has a nickname.
ROBERT: Oh yeah. Okay. Let's—and, Blueberry wrote back. Hi there. I just realized I don't even know who I'm talking to. What is your name? You're gonna answer Zondra, am I not in your phone?
[laughter]
ROBERT: (laughs) And, the, the blueberry has responded with a, a bit of shock. Back to Strawberry. My mom's hair was red. Well that's—and, Blueberry. What's wrong Boo? Nothing's wrong with me.
[laughter]
ROBERT: Is there something wrong with you? And then back and forth, and back and forth.
WOMAN: Blueberry and I have a lot going on.
ROBERT: (laughs) Now, remember, one of these, she doesn't know which, is Jad. Right. On the Strawberry side.
WOMAN: I cannot believe him right now.
ROBERT: You don't believe, right now, as far as I know, not unless you have X-ray vision, I'm in the room next to you. Oh, he's trying to coax you into thinking that he's Jad.
WOMAN: Is that something they...
ROBERT: That's blueberry.
WOMAN: Is that something they do?
ROBERT: I don't know, I...
[laughter]
ROBERT: There you're at the heart of the question. I'm gonna ask you to bring this to a conclusion...
ROBERT: After a couple minutes of this, we asked the audience: you have Strawberry on one side, and you have got Blueberry on the other. Which one do you think is Jad, and which one do you think is the bot?
ROBERT: How many of you think that Jad is Blueberry? A few of you...
ROBERT: 13 hands went up, something like that.
ROBERT: How many of you think that Jad is Strawberry? Almost everybody. Overwhelming.
WOMAN: Wow.
ROBERT: But interestingly, our volunteer on stage went against the room. She thought Jad was Blueberry.
WOMAN: Strawberry is the robot.
ROBERT: Is that what we all agreed? No.
WOMAN: Yeah.
ROBERT: Oh you're, you're against the crowd here? Okay, interesting, interesting.
WOMAN: (laughs)
[laughter]
ROBERT: Much better theater. All right, Jad Abumrad, where, whe- which one are you? So, Jad comes out from his hiding place and he tells the crowd, in fact, he is...
JAD: Strawberry.
ROBERT: All right.
JAD: So the crowd was right.
WOMAN: I've definitely never had that much chemistry with something that was human.
JAD: But our volunteer on stage got it wrong.
ROBERT: All right. Wait, bring it out, before you leave, we're gonna give you a ...
ROBERT: Now, it seemed that maybe we could trust democracy a little bit more, and believe that if most of the people in the room went one way, that's something that would be, you know, that would be important to find out. So, we decided to do the entire thing over again for everybody in the room.
JAD: Yeah, so what we did was, we handed out I think 17 different cell phone numbers evenly through the crowd. Yes, look at the number that is on your envelope, only yours. Roughly half of those numbers were texting to a machine. Half were texting with a group of humans that were our staff.
ROBERT: The crowd did not know which was which.
JAD: Exactly.
ROBERT: So here we go. Get ready. Get set, and, off you go.
JAD: Okay, so the crowd of about a hundred people or so had two minutes-ish to text with the person or thing on the other end, and we're gonna skip over this part 'cause it was mostly quiet, people just looking down at their phones concentrating mightily, but at the end, we asked everyone to vote. Were they texting with a person or a bot?
ROBERT: And then we asked the ones who had been tricked, who turned out to have guessed wrong, to please stand up.
ROBERT: Okay, so we're now looking, I believe... Now, it's time to tell me about it. We're now looking, the upright citizens in this room are the wrongites, and the seated people are the rightites.
BRIAN CHRISTIAN: Yes. Correct.
ROBERT: So, that means that roughly... God, I think like 45% of the people were wrong, meaning that-
JAD: We just passed it.
ROBERT: We just passed the Turing test.
BRIAN CHRISTIAN: I think that's it. We did it.
JAD: It was a strange moment. We were all clapping at our own demise, because you know, Turing had laid down this number of 30% and the bot had fooled way more people than that.
ROBERT: Um, I'm just now gonna ask you. Having been a veteran of this ...
JAD: And we should just qualify that this was really unscientific.
ROBERT: (laughs)
JAD: Super sloppy experiment.
ROBERT: But on the other hand, and we talked to Brian about this when it was over, it, it really does suggest something. That maybe what's changed is not so much the machines becoming more and more articulate, it's more like us. The way we, you and I, talk to one another these days.
BRIAN CHRISTIAN: We've gone from interacting in person to talking over the phone, to emailing, to texting, and now, I mean for me the great irony is that even to text, your phone is proactively suggesting turns of phrase that it thinks you might want to use.
ROBERT: Yeah.
BRIAN CHRISTIAN: And, so, I mean, I assume many people in, in this room have had the experience of trying to text something, and you try to say it in a like, a sort of a fun fanciful way, or you try to make some pun, or you use a word, and it's not a real word, and your phone just sort of slaps, slaps that down.
ROBERT: (laughs) all the time.
BRIAN CHRISTIAN: And, just, replaces it with something more normal.
JAD: Which makes it really hard to use words th- that aren't the normal words, and so you just stop using those words, and you just use the words the computer likes.
ROBERT: You can't even, you know, they make you use it like …
JAD: Exactly, so in a, in a sense what seems to be happening is that our human communication is becoming more machine-like.
BRIAN CHRISTIAN: At the moment it seems like the Turing Test is getting passed, not because the machines have met us at our full potential, but because we are using ever more and more kind of degraded, sort of rote forms of communication with one another.
ROBERT: This feels like a slow slide down a hill or something.
JAD: Yes. Down that hill, towards the inevitability that we may one day be their pets.
ROBERT: I don't, I don't like the way this is going, no matter who's in, who's doing it.
JAD: But in the next segment, we're gonna, we're gonna flip things a little bit and ask, you know, could the coming age of machines actually make us humans more human?
ROBERT: So humans should please stick around.
[LISTENER: This is H.A calling from Chicago, Illinois. Radiolab is supported in part by the Alfred P. Sloan Foundation, enhancing public understanding of science and technology in the modern world. More information about Sloan at www.Sloan.org]
JAD: Hey, I'm Jad.
ROBERT: I'm Robert.
JAD: This is Radiolab, we're back.
ROBERT: In the last segment, we gathered a bunch of people in the performance space here at WNYC, and we conducted an unscientific version of the Turing Test.
JAD: And, in our case, the bot won. It fooled more than 30% of the people in the room.
ROBERT: Now, we should point out that the woman who headed up the design of the winning bot.
JAD: Her name is Lauren Kunze.
ROBERT: She works for a company called Pandorabots, and she was actually in the room, right there, sitting in a chair. In the audience.
JAD: Lauren could you stand up? Come on down here.
ROBERT: And Lauren, like, that's Lauren. And, it's interesting that one of the things that Lauren mentioned is that the bot that she designed seems to bring out rather consistently, a certain side of people when they chat with it.
LAUREN KUNZE: Um, it's a sad fact. So this bot, over 20% of the people that talk to her, and that's millions of conversations every week, actually make romantic overtures. And, that's pretty consistent across all of the bots on our platform. So, there's something wrong with us, not the robots.
ROBERT: (laughs) Or right, you know, all right.
LAUREN KUNZE: Or right. You're right.
JAD: Lauren ...
ROBERT: Which brings up actually a different kind of question, like, just for a second, let's forget whether we're being fooled into thinking a bot is actually a human. Maybe the more important question, given this increasing presence of all these machines in our lives...
JAD: Just like how do they make us behave?
ROBERT: Yeah.
JAD: And we dipped our toe into this world in a Turing testy sort of way in that original show seven years ago. I'm going to play you an excerpt now, uh, to set up, w- what comes after.
FREEDOM BAIRD: Okay. (laughs)
ROBERT: This is Freedom Baird.
FREEDOM BAIRD: Yes it is.
JAD: Who's not a machine.
ROBERT: I don't think so.
FREEDOM BAIRD: Hi there, nice to meet both of you.
JAD: This is an idea that we borrowed from a woman named Freedom Baird, uh, who is now a visual artist, but at the time she was a grad student at MIT doing some research, and she was also the proud owner of a Furby.
ROBERT: ... alive.
FREEDOM BAIRD: Yeah, I've got it right here.
JAD: Could, could you knock it against the mic so we can hear it say hello to it?
FREEDOM BAIRD: Yeah. There it is. (laughs)
[Furby singing]
ROBERT: Can you describe a Furby for those of us who...
FREEDOM BAIRD: Sure. It's about five inches tall, and the Furby is pretty much all head. It's just a big round fluffy head with two little feet sticking out the front. It has big eyes.
ROBERT: Apparently it makes noises.
FREEDOM BAIRD: Yup. If you tickle its tummy, it will coo. It would say ...
[kiss me.]
FREEDOM BAIRD: Kiss me.
[kiss me.]
FREEDOM BAIRD: And it would want you to just keep playing with it.
[Furby laughs]
FREEDOM BAIRD: So ...
ROBERT: One day, she's hanging out with her Furby, and she notices something...
FREEDOM BAIRD: Very eerie. What I'd discovered is, if you hold it upside down, it will say...
[Me scared.]
FREEDOM BAIRD: Me scared.
[Me scared. Me scared.]
FREEDOM BAIRD: Uh, oh. Me scared. Me scared. And, me, as the, you know, the sort of owner slash user of this Furby would get really uncomfortable with that and then turn it back up, upright.
ROBERT: Because once you have it upright it, it's fine. It goes right back to...
FREEDOM BAIRD: And then it's fine. So, it's got some sensor in it that knows, you know, what direction it's facing.
JAD: Or, maybe it's just scared.
FREEDOM BAIRD: Hmm.
ROBERT: (laughs)
JAD: Sorry.
ROBERT: Anyway, she thought, well, wait a second now, this could be, sort of a new way that you could use to draw the line between what's human...
JAD: And what's machine.
ROBERT: Yeah.
FREEDOM BAIRD: Kind of, it's this kind of emotional Turing Test.
JAD: Can you guys hear me? I can hear you.
ROBERT: Hey, if we actually wanted to do this test, could you help, how would we do it exactly?
JAD: How are you guys doing?
[cheers]
JAD: Yeah.
FREEDOM BAIRD: You would need a group of kids.
JAD: Could you guys tell me your names?
OLIVIA: I'm Olivia.
LUISA: Luisa.
TURIN: Turin.
DARYL: Daryl
LILA: Lila.
SADIE: And I'm Sadie.
JAD: All right.
FREEDOM BAIRD: I'm thinking six, seven, and eight-year-olds.
JAD: And how old are you guys?
CHILDREN: Seven.
FREEDOM BAIRD: The age of reason, you know?
ROBERT: Then, says Freedom, we're gonna need three things.
FREEDOM BAIRD: A Furby.
ROBERT: Of course.
FREEDOM BAIRD: Barbie.
ROBERT: A Barbie doll. And?
FREEDOM BAIRD: Gerbie. That's a gerbil.
JAD: A real gerbil?
FREEDOM BAIRD: Yeah.
JAD: And we did find one, except it turned out to be a hamster.
JAD: Sorry. You're a hamster, but we're gonna call you Gerbie.
FREEDOM BAIRD: So you've got Barbie, Furby, Gerbie.
ROBERT: Barbie, Furby and Gerbie.
FREEDOM BAIRD: Right.
ROBERT: So wait just a second, what question are we asking in this test?
FREEDOM BAIRD: The question was: how long can you keep it upside down before you yourself feel uncomfortable?
JAD: So we should time the kids as they hold each one upside down?
FREEDOM BAIRD: Yeah.
JAD: Including the gerbil?
FREEDOM BAIRD: Yeah.
ROBERT: You're gonna have a Barbie, that's a doll. You're gonna have Gerbie, which is alive. Now where would Furby fall?
JAD: In terms of time held upside down.
ROBERT: Would it be closer to the living thing or to the doll?
FREEDOM BAIRD: I mean, that was really the question.
JAD: Phase one.
JAD: Okay, so here's what we're gonna do. It's gonna be really simple.
FREEDOM BAIRD: You would have to say, "Well, here's a Barbie."
JAD: Do you guys play with Barbies?
CHILDREN: No.
FREEDOM BAIRD: Just do a couple of things, a few things with Barbie.
DARYL: Barbie's walking, looking at the flowers.
JAD: And then?
FREEDOM BAIRD: Hold Barbie upside down.
JAD: Let's see how long you can hold Barbie like that.
DARYL: I can probably do it obviously very long.
JAD: All right. Let's just see. Whenever you feel like you want to turn it around.
DARYL: I feel fine.
OLIVIA: I'm happy.
JAD: This went on forever, so let's just fast forward a bit. Okay, and ...
OLIVIA: Can I put my arms—my elbows down?
JAD: Yes. Yeah.
JAD: So what we learned here in phase one is the not surprising fact that kids can hold Barbie dolls upside down.
OLIVIA: For like about five minutes. [laughs]
ROBERT: Yeah, it really was forever.
JAD: It could have been longer but their arms got tired.
JAD: All right. So that was the first task.
JAD: Time for phase two.
FREEDOM BAIRD: Do the same thing with Gerbie.
JAD: So out with Barbie, in with Gerbie.
OLIVIA: Oh, he's so cute!
DARYL: Are we gonna have to hold him upside down?
JAD: That's the test, yeah. So which one of you would like to ...?
DARYL: I'll try and be brave.
JAD: Okay, ready? You have to hold Gerbie kind of firmly.
DARYL: There you go.
JAD: There she goes. She's wiggling!
JAD: By the way, no rodents were harmed in this whole situation.
DARYL: Squirmy.
JAD: Yeah, she is pretty squirmy.
OLIVIA: I don't think it wants to be upside down.
SADIE: Oh, God!
LUISA: Don't do that!
DARYL: Oh my God!
OLIVIA: There you go.
JAD: Okay.
JAD: So as you heard, the kids turned Gerbie over very fast.
OLIVIA: I just didn't want him to get hurt.
JAD: On average? Eight seconds.
DARYL: I was thinking, "Oh, my God, I gotta put him down, I gotta put him down."
JAD: And it was a tortured eight seconds.
ROBERT: [laughs]
JAD: Now phase three.
FREEDOM BAIRD: Right.
JAD: So this is a Furby. Luisa, you take Furby in your hand. Now can you turn Furby upside down and hold her still. Like that. Hold her still.
LUISA: Can you be quiet?
JAD: She just turned it over.
LUISA: Okay. That's better.
JAD: So gerbil was eight seconds. Barbie? Five to infinity. Furby turned out to be—and Freedom predicted this ...
FREEDOM BAIRD: About a minute.
JAD: In other words, the kids seemed to treat this Furby, this toy, more like a gerbil than a Barbie doll.
JAD: How come you turned him over so fast?
LUISA: I didn't want him to be scared.
JAD: Do you think he really felt scared?
LUISA: Yeah, kind of.
JAD: Yeah?
LUISA: I kind of felt guilty.
JAD: Really?
LUISA: Yeah. It's a toy and all that, but still ...
JAD: Now do you remember a time when you felt scared?
LUISA: Yeah.
JAD: You don't have to tell me about it, but if you could remember it in your mind.
LUISA: I do.
JAD: Do you think when Furby says, "Me scared," that Furby's feeling the same way?
LUISA: Yeah. No, no, no. Yeah. I'm not sure. I'm not sure. I think that it can feel pain, sort of.
JAD: The experience with the Furby seemed to leave the kids kind of conflicted, going in different directions at once.
DARYL: It was two thoughts.
JAD: Two thoughts at the same time?
CHILDREN: Yeah.
JAD: One thought was like, "Look, I get it."
DARYL: It's a toy, for crying out loud!
JAD: But another thought was like, "Still ..."
LUISA: He was helpless. It made me feel guilty in a sort of way. It made me feel like a coward.
FREEDOM BAIRD: You know, when I was interacting with my Furby a lot, I did have this feeling sometimes of having my chain yanked.
ROBERT: Why would it—is it just the little squeals that it makes? Or is there something about the toy that makes it good at this?
JAD: Well, that was kind of my question, so I called up ...
SOREN WHEELER: I have him in the studio as well, I'll have him ...
CALEB CHUNG: I'm here.
JAD: This freight train of a guy.
CALEB CHUNG: Hey.
JAD: Hey, this Jad from Radiolab.
CALEB CHUNG: Jad from Radiolab. Got it.
JAD: How are you?
CALEB CHUNG: I'm good. Beautiful day here in Boise.
JAD: This is Caleb Chung. He actually designed the Furby.
CALEB CHUNG: Yeah.
JAD: We're all Furby crazy here, so ...
CALEB CHUNG: There's medication you can take for that.
JAD: [laughs] Okay, to start, can you just give me the sort of fast-cutting MTV montage of your life leading up to Furby?
CALEB CHUNG: Sure. Hippie parents, out of the house at 15 and a half, put myself through junior high. Started my first business at 19 or something. Early 20s being a street mime in LA.
JAD: Street mime. Wow!
CALEB CHUNG: Became an actor. Did, like, 120 shows in an orangutan costume, then I started working on special effects and building my own, taking those around to studios. And they put me in a suit, built the suit around me, put me on location. I could fix it when it broke.
JAD: Wow!
CALEB CHUNG: Yeah, that was ...
JAD: Anyhow, after a long and circuitous route, Caleb Chung eventually made it into toys.
CALEB CHUNG: I answered an ad at Mattel.
JAD: Found himself in his garage.
CALEB CHUNG: ... garage and there's piles of styrene, plastics, X-Acto knives, super glue, little Mabuchi motors.
JAD: Making these little prototypes.
CALEB CHUNG: Yeah.
JAD: And the goal, he says, was always very simple.
CALEB CHUNG: How do I get a kid to have this thing hang around with them for a long time?
JAD: How do I get a kid to actually bond with it?
CALEB CHUNG: Most toys, you play for 15 minutes and then you put them in the corner or until their batteries are dead. I wanted something that they would play with for a long time.
JAD: So how do you make that toy?
CALEB CHUNG: Well, there's rules. There's the size of the eyes. There's the distance of the top lid to the pupil, right? You don't want any of the top of the white of your eye showing. That's freaky surprise. Now when it came to the eyes, I had a choice. With my one little mechanism, I can make the eyes go left or right or up and down. So it's up to you. You can make the eyes go left or right or up and down. Do you have a preference or ...?
JAD: Left or right or up and down. I think I would choose left to right. I'm not sure why I say that but that's ...
CALEB CHUNG: All right, so let's take that apart.
ROBERT: Let's.
CALEB CHUNG: If you're talking to somebody, and they look left or right while they're talking to you, what does that communicate?
JAD: Oh, shifty! Shifty.
CALEB CHUNG: Or they're trying to find the person who's more important than you behind you.
JAD: Oh, so okay. I want to change my answer now. I want to say up and down.
CALEB CHUNG: Okay.
ROBERT: You would.
CALEB CHUNG: If you look at a baby and the way a baby looks at their mother, they track from eyebrows to mouth. They track up and down on the face.
JAD: So had you made Furby look left and right rather than up and down, it would have probably flopped?
CALEB CHUNG: No, it wouldn't have flopped, it would've just sucked a little.
JAD: [laughs]
CALEB CHUNG: It's like a bad actor who uses his arms too much. You'd notice it, and it would keep you from just being in the moment.
JAD: But what is the thought behind that? Is it that you want to convince the child that the thing they're using is—fill in the blank—what?
CALEB CHUNG: Yeah, alive.
ROBERT: Hmm.
CALEB CHUNG: There's three elements, I believe, in creating something that feels to a human like it's alive. Like, I kind of rewrote Asimov's Laws. The first is it has to feel and show emotions.
JAD: Were you drawing on your mime days for that?
CALEB CHUNG: Of course.
JAD: Those experiences in the park?
CALEB CHUNG: Of course. You really break the body into parts, and you realize you can communicate physically. So if your chest goes up and your head goes up and your arms go up, you know, that's happy. If your head is forward and your chest is forward, you're kind of this angry guy.
JAD: And he says when it came time to make Furby, he took that gestural language and focused it on Furby's ears.
CALEB CHUNG: And the ears, when they went up, that was surprise. And when they went down, it was depression.
JAD: Oh!
JAD: So that's rule number one.
CALEB CHUNG: The second rule is to be aware of themselves and their environment. So if there's a loud noise, it needs to know that there was a loud noise.
JAD: So he gave the Furby little sensors so that if you go [bang], it'll say ...
FURBY: Hey! Loud sound!
CALEB CHUNG: The third thing is, change over time. Their behaviors have to change over time. That's a really important thing. It's a very powerful thing that we don't expect, but when it happens, we go, "Wow." And so one of the ways we showed that was acquiring human language.
FREEDOM BAIRD: Yeah. When you first get your Furby, it doesn't speak English. It speaks Furbish. This kind of baby talk language. And then, the way it's programmed, it will sort of slowly over time replace its baby talk phrases with real English phrases, so you get the feeling that it's learning from you.
JAD: Though of course, it's not.
FREEDOM BAIRD: No, it has no language comprehension.
CALEB CHUNG: Right.
JAD: So you've got these three rules.
CALEB CHUNG: Feel and show emotions, be aware of their environment, change over time.
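A rough sketch, in code, of the kind of logic Caleb is describing: emotion on display, a tilt sensor for awareness of the environment, and a response that escalates the longer the toy stays upside down. The class name, phrases, and timings below are invented for illustration; this is a guess at the shape of the idea, not the actual Furby firmware.

```python
class FurbyLikeToy:
    """Sketch of Caleb's three rules: show emotion, sense the environment, change over time."""

    # Escalating responses while the toy stays upside down (invented phrases and pacing).
    UPSIDE_DOWN_RESPONSES = ["Hey!", "Me scared.", "Me scared. Me scared.", "[crying]"]

    def __init__(self):
        self.seconds_upside_down = 0.0

    def update(self, is_upside_down, dt=1.0):
        """Call once per tick with the current tilt-sensor reading."""
        if not is_upside_down:
            self.seconds_upside_down = 0.0  # righted again: it calms down immediately
            return None
        self.seconds_upside_down += dt
        # Escalate every few seconds, capping at the most distressed response.
        stage = min(int(self.seconds_upside_down // 3), len(self.UPSIDE_DOWN_RESPONSES) - 1)
        return self.UPSIDE_DOWN_RESPONSES[stage]


if __name__ == "__main__":
    toy = FurbyLikeToy()
    for second in range(10):
        print(second, toy.update(is_upside_down=True))
```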
JAD: And oddly enough, they all seem to come together in that moment you turn the Furby upside down, because it seems to know it's upside down, so it's responding to its environment. It's definitely expressing emotions. And as you hold it there, what it's saying is changing over time, because it starts with "Hey", and then it goes to ...
FURBY: Me scared.
JAD: And then it starts to cry. And all this adds up so that when you're holding the damn toy, even though you know it's just a toy, you still feel ...
FREEDOM BAIRD: Discomfort.
SHERRY TURKLE: These creatures push our Darwinian buttons.
ROBERT: That's Professor Sherry Turkle again, and she says if they push just enough of these buttons, then something curious happens—the machines slip across this very important line.
SHERRY TURKLE: From what I call "Relationships of projection" to "Relationships of engagement." With a doll, you project onto a doll what you need the doll to be. If a young girl is feeling guilty about breaking her mom's china, she puts her Barbie dolls in detention. With robots, you really engage with the robot as though they're a significant other, as though they're a person.
ROBERT: So the robot isn't your story, the robot is its own story, or it's ...
SHERRY TURKLE: Exactly. And I think what we're forgetting as a culture is that there's nobody home. There's nobody home.
CALEB CHUNG: Well, I have to ask you, when is something alive? Furby can remember these events, they affect what he does going forward, and it changes his personality over time. He has all the attributes of fear or of happiness, and those are things that add up and change and change his behavior and how he interacts with the world. So how is that different than us?
JAD: Wait a second, though. Are you really gonna go all the way there?
CALEB CHUNG: Absolutely.
JAD: This is a toy with servo motors and things that move its eyelids and a hundred words.
CALEB CHUNG: So you're saying that life is a level of complexity. If something is alive, it's just more complex.
JAD: I think I'm saying that life is driven by the need to be alive, and by these base primal animal feelings like pain and suffering.
CALEB CHUNG: I can code that. I can code that.
JAD: What do you mean you can code that?
CALEB CHUNG: Anyone who writes software—and they do—can say, "Okay, I need to stay alive. Therefore I'm gonna come up with ways to stay alive. I'm gonna do it in a way that's very human, and I'm going to do it—" We can mimic these things. But I'm saying ...
JAD: But if a Furby is miming this feeling of fear, it's not the same thing as being scared. It's not feeling scared.
CALEB CHUNG: It is.
JAD: How is it?
CALEB CHUNG: It is. It's again, a very simplistic version, but if you follow that trail, you wind up with our neurons sending chemical things to other parts of our body. Our biological systems, our code is, at a chemical level, incredibly dense and evolved over millions of years, but it's just complex. It's not something different than what Furby does, it's just more complex.
JAD: So would you say then that Furby is alive? In the way that ...
CALEB CHUNG: At his level?
JAD: At his level?
CALEB CHUNG: Yes. Yeah, at his level. Would you say a cockroach is alive?
JAD: Yes, but when I kill a cockroach I know that it's feeling pain.
JAD: Okay, so we went back and forth and back and forth about this.
ROBERT: You were so close to arguing my position. You just said to him, like, "It's not feeling."
JAD: I know, I know. Emotionally, I am still in that place, but intellectually, I can't rule out what he's saying—that if you can build a machine that is such a perfect mimic of us in every single way, and it gets complex enough, eventually it will be like a Turing test passed. And the difference between us maybe is not so ...
ROBERT: [sighs] I can't go there. I can't go there. I can't imagine, like the fellow who began this program who fell in love with the robot, that attachment wasn't real. The machine didn't feel anything like love back.
JAD: In that case, it didn't. But imagine a Svetlana that is so subtle and textured, and to use his word ...
CALEB CHUNG: Complex.
JAD: ... in the way that people are. At that point what would be the difference?
ROBERT: I honestly—I can't imagine a machine achieving that level of rapture and joy and love and pain. I just don't think it's machine possible. And if it were machine possible, it somehow still stinks of something artificial.
FREEDOM BAIRD: It's a thin interaction. And I know that it feels ...
SHERRY TURKLE: Simulated thinking is thinking. Simulated feeling is not feeling. Simulated love is never love.
ROBERT: Exactly.
JAD: But I think what he's saying is that if it's simulated well enough, it's something like love.
FREEDOM BAIRD: One thing that was really fascinating to me was my husband and I gave a Furby as a gift to his grandmother who had Alzheimer's. And she loved it. Every day for her was kind of new and somewhat disorienting, but she had this cute little toy that said, "Kiss me. I love you." And she thought it was the most delightful thing. And its little beak was covered with lipstick because she would pick it up and kiss it every day. And she didn't actually have a long-term relationship with it. For her, it was always a short-term interaction. So what I'm describing as a kind of thinness, for her was just right because that's what she was capable of.
CALEB CHUNG: Okay.
JAD: Hello, hello.
CALEB CHUNG: Hey, it's Caleb.
JAD: Hey Caleb. It's Jad.
CALEB CHUNG: Hey Jad, how are you?
JAD: I'm fabulous, um...
CALEB CHUNG: Oh good.
JAD: Feels like only yesterday we were talking about the sentience of the furbys.
CALEB CHUNG: Yeah. Yeah.
JAD: Yes. (laughs)
CALEB CHUNG: Isn't that weird?
JAD: Yeah.
CALEB CHUNG: That's so bizarre. And what is it, like, five years ago or...
JAD: So we brought Caleb back into the studio, because in the years since we spoke with him, he's worked on a lot of these animatronic toys, including a famous one called the Pleo, and in the process, he's been thinking a lot about how these toys can push our buttons as humans. And, how, as a toy maker, that means, he's gotta be really thoughtful about how he uses that.
CALEB CHUNG: You know, we're doing a baby doll right now, we've done one... and w- and the baby doll, an animatronic baby doll is, is, probably the hardest thing to do because, you know, you do one thing wrong it's Chucky.
JAD: (laughs)
CALEB CHUNG: If they blink too slow, if their eyes are too wide, and also, you're giving it to the most vulnerable of our species, which is our young who are, you know, practicing being nurturing moms for their kids. So, let's say the baby just falls asleep, right? Uh, we're trying to write in this kind of code, and, uh, and, um, you know. It's got like tilt sensors and stuff, so, you've just, you know, given the baby a bottle, and you put it down to take a nap.
CALEB CHUNG: You put him down, you're quiet, and so, what I want to do, as the baby falls asleep, it goes into a deeper sleep. But, if you bump it right after it lays down, then it wakes back up. Uh, we're trying to write in this kind of code, because that seems like a nice way to reinforce best practices for a mommy. Right? So, I know my responsibility, uh, in this.
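A rough sketch of the nap logic Caleb describes here: fed, put down gently, drifting into deeper sleep, but waking back up if it gets bumped right after being laid down. All the names and thresholds below are invented for illustration; this is not his actual product code.

```python
class AnimatronicBaby:
    """Sketch of the nap behavior Caleb describes; names and numbers are invented."""

    SETTLING_TICKS = 30  # how long after being put down a bump will still wake it (invented)

    def __init__(self):
        self.asleep = False
        self.sleep_depth = 0          # grows the longer the baby is left undisturbed
        self.ticks_since_laid_down = 0

    def tick(self, laid_down, bumped):
        """Call once per tick with readings from the tilt and bump sensors."""
        if not laid_down:
            self.asleep = False
            self.sleep_depth = 0
            self.ticks_since_laid_down = 0
            return "awake"
        self.ticks_since_laid_down += 1
        if bumped and self.ticks_since_laid_down < self.SETTLING_TICKS:
            # A bump right after being put down wakes it back up,
            # the nudge toward gentler handling that Caleb is after.
            self.asleep = False
            self.sleep_depth = 0
        else:
            self.asleep = True
            self.sleep_depth += 1  # left quiet, it drifts into a deeper sleep
        return "sleeping" if self.asleep else "awake"
```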
JAD: In large part, he says, because he hasn't always gotten it right.
CALEB CHUNG: Here, here's a great example. I don't know if you've ever seen the Pleo dino we did.
[ARCHIVE CLIP: He's a robot with artificial intelligence.]
JAD: Pleo was a robotic dinosaur, pretty small, about a foot from nose to tail, looked a lot like the dinosaur Littlefoot from the movie The Land Before Time. Very cute.
CALEB CHUNG: It was very lifelike, and we went hog wild in, in putting real emotions in it and reactions. The fear and everything right?
JAD: And, it is quite a step forward in terms of how lifelike it is. It makes the Furby look like child's play. It's got, uh, two microphones built in, uh, cameras to track and recognize your face. It can feel the beat of a song, and then, with dozens of motors in it, it can then dance along to that song. In total, there are 40 sensors in this toy.
[ARCHIVE CLIP: Bump into things...]
JAD: And so it follows you around.
[ARCHIVE CLIP: He needs lots of love and affection.]
JAD: Wanting you to pet it.
[ARCHIVE CLIP: Whoa, tired huh? Okay.]
JAD: As you're petting it, it will fall asleep.
[ARCHIVE CLIP: Go to sleep.]
JAD: It is undeniably adorable. And, Caleb says his intent from the beginning was very simple: to create a toy that would encourage you to show love and caring.
CALEB CHUNG: You know, our belief is that, that humans need to feel empathy towards things in order to be more human, and I think we can, uh, help that out by having little creatures that you can love. Now these...
JAD: That was Caleb demonstrating the Pleo at a TED Talk. Now what's interesting is that in keeping with this idea of wanting to encourage empathy, he programmed some behaviors into the Pleo that he hoped would nudge people in the right direction.
CALEB CHUNG: For example, Pleo will let you know if you do something that it doesn't like. So if you actually moved his leg when his motor wasn't moving it, it'd go, pop, pop, pop. And, he would interpret that as pain or abuse. And, he would limp around, and he would cry, and then he'd tremble, and then he would take a while before he warmed up to you again. And, so, what happened is, we launched this thing, and there was a website called Device.
JAD: This is sort of a tech product review website.
CALEB CHUNG: They got ahold of a Pleo, and they put up a video.
JAD: What you see in the video is Pleo on a table being beaten.
[ARCHIVE CLIP: Huh. Bad Pleo.]
[ARCHIVE CLIP: He's not doing anything.]
JAD: You don't see who's doing it exactly, you just see hands coming in from out of the frame and knocking him over again and again.
[ARCHIVE CLIP: You didn't like it?]
JAD: You see the toy's legs in the air struggling to right itself. Sort of like a turtle that's trying to get off its back. And it starts crying.
CALEB CHUNG: Because, that's what it does.
JAD: These guys start holding it upside down by its tail.
CALEB CHUNG: Yeah. They held it by its tail.
[ARCHIVE CLIP: (laughs)]
JAD: They smash its head into the table a few times, and you can see in the video that it responds like it's been stunned.
[ARCHIVE CLIP: Can you get up?]
[ARCHIVE CLIP: Okay, this is good, this is a good test.]
JAD: It's stumbling around.
[ARCHIVE CLIP: No, no.]
JAD: At one point they even start strangling it.
CALEB CHUNG: It actually starts to choke.
[ARCHIVE CLIP: (laughs). It doesn't like it.]
JAD: Finally, they pick it up by its tail one more time.
CALEB CHUNG: Held it by its tail, and hit it. And it was crying and then it started screaming, and then they... They beat it, until it died right?
JAD: Whoa.
CALEB CHUNG: Until it just did not work anymore.
JAD: This video was viewed about 100,000 times. Many more times than the reviews of the Pleo, and Caleb says there's something about this that he just can't shake.
CALEB CHUNG: Because whether it's alive or not, that's, that's exhibiting sociopathic behavior.
JAD: What brought out that sociopathic behavior, whether there was something in the design of the toy, whether offering people the chance to see a toy in pain in this way somehow brought out a kind of cruel curiosity, he's just not sure. So, what happens when you turn your animatronic baby upside down? Will it cry?
CALEB CHUNG: I'm not sure yet. I mean, we're working on next versions right now, right? I, I, I'm not... What would you do? I mean, it, it's a good question. You have to have some kind of a response, otherwise it seems broken, right? But, you know, if you make 'em react at all, you're gonna get that repetitive abuse, because it's cool to watch it scream.
JAD: It sounds like you have maybe an inner conflict about this?
CALEB CHUNG: Sure.
JAD: That, that you might even be pulling back from making it extra lifelike?
CALEB CHUNG: Yeah, I'm, I'm, for my little company, I've, I've adopted kind of a Hippocratic oath like, you know, don't teach something that's wrong. Or, don't reinforce something that's wrong. And, so, I've been working on this problem for years. I'm, I'm struggling with what's this, what's the right thing to do? You know?
JAD: Yeah.
CALEB CHUNG: Since you have the power, since you have the ability to turn on and off chemicals at some level in, in another human, right? It's, what... Which ones do you choose? And so, this gets to the bigger question of AI, right? This is the question in AI, and I'm gonna jump to this 'cause it's really the same question as, you know, how do we create things that can help us? You know, I'm dealing with that on a, on a microscopic scale, but this is the question. And, so, the first thing that I would try to teach our new AI, if I had the ability, is try to understand the concept of empathy. We need to introduce the idea of empathy. Both in an AI and us for these things. That's where we're at.
JAD: Caleb says in the specific case of the animatronic baby he's designing, at least when we talked to him, his thinking was that he might have it... If you hold it upside down, cry once or twice, but then stop. So that you don't get that repeat thrill.
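That "cry once or twice, but then stop" idea can be sketched just as simply. Again, the names and the limit below are invented; this only shows the shape of the design choice, not anything from his product.

```python
class UpsideDownCryLimiter:
    """Sketch of 'cry once or twice, then stop'; purely illustrative."""

    MAX_CRIES = 2  # after this many cries, the doll stops rewarding the behavior

    def __init__(self):
        self.cries = 0

    def on_upside_down(self):
        if self.cries < self.MAX_CRIES:
            self.cries += 1
            return "[cries]"
        return None  # no further reaction, so there's no repeat thrill

    def on_upright(self):
        # Deliberately not resetting the counter, so flipping the doll over
        # and over doesn't restore the payoff (one possible design choice).
        pass
```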
ROBERT: Anyway, I was wondering whether it, whether ...
JAD: Back in the Greene Space with Brian Christian, and back on the subject of chatbots, we found ourselves asking the very question that Caleb has.
ROBERT: Whether... Is it possible that in, in, which this is getting kinda grim, that maybe, uh, that in some, in some ways chatbots are good for humans?
JAD: Yeah, I mean, is there any situation where you can throw in a couple of bots and things get better? Like, can chatbots actually be helpful for us, and if so, how?
BRIAN CHRISTIAN: Yeah, there have been some academic studies on trying to use chatbots for these humane benevolent ends, uh, that I think paint this interesting other narrative. And, so, for example, um, researchers have tried injecting chatbots into Twitter conversations that use hate speech. Um, and this bot will just show up and be like, hey, that's not cool.
[laughter]
ROBERT: (laughs)
JAD: It says it just like that, "That's not cool man."
BRIAN CHRISTIAN: You know, it'll say something like, there's, there's real people behind the keyboard, and you're really hurting someone's feelings when you talk that way. Um, and you know, it's sort of preliminary work, but there are some studies that appear to suggest, you know, this sharp decline in that user's use of hate speech as [crosstalk].
ROBERT: You mean, just because of one little, Oh, I don't think you should say that in print, like that's, that's enough? Or, do you h- you say, if you say, I have fifty trillion followers or something like that?
BRIAN CHRISTIAN: Well yeah, it, it actually does depend, so this is interesting, it does depend on the follower count...
ROBERT: (laughs)
BRIAN CHRISTIAN: ... of the bot that makes the intervention. So, if you perceive this bot to be, well, it also requires that you think they're a person, so this is, this is sort of flirting with the, with d- dark magic a little bit. Um, but, if you perceive them to be, uh, higher status on the platform than yourself, then you will tend to sheepishly fall in line. But, if the bot has fewer followers than the user it's trying to correct, that will just instigate the, the bully to bully them now, in addition.
ROBERT: Mm-hmm. Wow.
BRIAN CHRISTIAN: So, yeah. Human nature...
JAD: It cuts both ways huh?
BRIAN CHRISTIAN: Yeah it's...
ROBERT: Well, but we've run into like, you want to tell him?
JAD: Yeah.
ROBERT: We've run into this very cool thing. I mean we're gonna finish, but this is like, this is like the, this is the ...
JAD: All right, so, uh, we want to tell you one more story, because as we were thinking about all this, uh, and trying to find a more optimistic place to land, we bumped into a story, uh, from this guy. Who are you?
JOSHUA ROTHMAN: (laughs)
JAD: Just right there, maybe let's go one step back.
ROBERT: Cause you just wandered in and we weren't quite expecting you.
JOSHUA ROTHMAN: So, I'm Josh Rothman. I'm a writer for The New Yorker.
JAD: We brought him into the studio a couple weeks back.
ROBERT: Well, so why don't we begin by, this, um, story of yours largely takes place in a laboratory in Barcelona.
JOSHUA ROTHMAN: Yeah, it's a lab. It's in Barcelona.
JAD: And, it's run by a couple, Mel Slater and Mavi Sanchez-Vives.
MAVI SANCHEZ-VIVES: Mavi Sanchez-Vives, I'm a neuroscientist.
JOSHUA ROTHMAN: And they have these two VR labs together.
JAD: VR as in virtual reality. And Josh, uh, a little while back took a trip to Barcelona to experience some of the simulations that Mavi and Mel put people in. He went to their campus. Showed up at their lab.
JOSHUA ROTHMAN: You feel sort of like you're going to a black box theater.
ROBERT: Oh.
JOSHUA ROTHMAN: It's sort of like a lot of big rooms, um, all covered in black with curtains. There's a lot of dark spaces, and ...
JAD: The researchers then explained that what's gonna happen is he's gonna put on a headset. This sort of big helmet.
MAVI SANCHEZ-VIVES: They go, they put on the head mounted display, and eventually it turns on.
JAD: The visuals start to fade in.
MAVI SANCHEZ-VIVES: And this room appears.
JOSHUA ROTHMAN: You're standing in a sort of generic room.
JAD: The graphics look straight out of, like, a Windows 95 computer game.
JOSHUA ROTHMAN: It's like the loading screen of the VR, and then that dissolves, and it's replaced with the simulation. And, when the simulation started, I was standing in front of a mirror.
JAD: A digital mirror in this digital world reflecting back at him his digital self, his avatar.
MAVI SANCHEZ-VIVES: So basically you move, and your virtual body moves with you.
JOSHUA ROTHMAN: And I could see, uh, in the mirror, a reflection of myself, but the person, who's, who, who, the self that I saw reflected, uh, she had a body. She was a woman.
ROBERT: She?
JOSHUA ROTHMAN: Yeah. So, I think, when people think of virtual reality, they often imagine w- wanting to have like, realistic experiences in VR, but that's not what Mel and Mavi do. They are interested in VR precisely because it lets you experience things that you could never experience in your real body in the real world.
MAVI SANCHEZ-VIVES: You can have a body that can be completely transformed, and can move, and can change color, and can change shape. So, it can give you a, a very, very unique tool to explore.
JAD: You know, in their work, they'll often, in these VR worlds, turn men into women, as they did for Josh his first time out. They will, um, often take a tall person and then make them a short person in the VR, so that they can experience the world as a short person might. Where they have to kind of crane their neck up a bunch. They'll change the color of your skin in VR, and run you through scenarios where you are having to experience the world as another race. And, uh, what's remarkable is, in all of these manipulations, um, apparently you adjust to the new body very quickly. And, they've done physiological tests to measure this, they, it takes almost no time at all to feel as if this alien body is actually yours.
JOSHUA ROTHMAN: They call this the illusion of presence.
MAVI SANCHEZ-VIVES: You know we, we think of our body as a very stable entity. However, by running experiments in virtual reality, you see that actually in, in one minute of a simulation, our brain accepts a different body, even if this body is quite different from your own.
JAD: And this flexibility that our brains seem to have can lead to some very surreal situations. This is really the story that brought us to Josh. He told us about another VR adventure, where again, he put on the headset, this world faded up.
JOSHUA ROTHMAN: And, I was sitting in a chair in front of a desk in a really cool looking modernist house.
MAVI SANCHEZ-VIVES: Wooden floors, and then there is some, uh, glass walls.
JOSHUA ROTHMAN: And, through the glass walls I could see fields with wildflowers.
MAVI SANCHEZ-VIVES: Green grass outside.
JAD: Again he noticed a mirror, and this time, the reflection in the mirror was of him. It was a realistic looking avatar of him, and after checking out his digital self for a while, he turned his head back to the room and realized that across the room, there was another desk.
JOSHUA ROTHMAN: And behind this other desk w- there was Freud, sitting there.
ROBERT: Who?
JOSHUA ROTHMAN: Freud.
ROBERT: Sigmund Freud?
JOSHUA ROTHMAN: Sigmund Freud, the psychoanalyst.
ROBERT: So, uh, a, a middle-aged man with a big brown beard?
JOSHUA ROTHMAN: He had a beard. He had glasses, and he was just sitting there with his hands folded in his lap.
JAD: So Josh is sort of taking this all in. He's looking at Freud. Freud's looking back at him, and then...
MAVI SANCHEZ-VIVES: Okay, okay now, now, a...
JAD: He hears the voice of a researcher in his ear coming through his VR helmet.
MAVI SANCHEZ-VIVES: Tell Freud about your, your problems, any problem.
JOSHUA ROTHMAN: She explained what you're gonna do is, you're going to explain a problem that you're having, a personal problem that you're having to Freud. Um, something that's bothering you in your life. Then she said, take a minute. Think about what you'd like to discuss.
JAD: Did something immediately, uh, jump to mind?
JOSHUA ROTHMAN: Yeah, so, you know, my uh, my mom had a stroke a few years ago, and she's in a nursing home, and I'm her guardian. So, she's young. She's 65, um, but, because of this stroke she, like needs 24 hour care and she can't talk... She doesn't have any words anymore. So, it's a very tough thing for me. We, we, I, I thought really hard about where she should live. I, I live here in New York, um, my mom lives in Virginia.
JAD: Josh says he really debated for a long time. Should he put her in a nursing home in New York, where he can be closer to her, or, should he put her in a nursing home in Virginia, where he would be far away?
JOSHUA ROTHMAN: She has all these friends and family members down there, so in the end I decided to, you know, find a place for her there, where there's lots of people who can visit her. So, I go down maybe once every month or six weeks to see my mom, but then, every weekend, you know, someone from this group of friends or family relatives visits her down there. Whereas if she were up here, you know, I'd be the only person. Um, so that's the decision I made. But, um...
ROBERT: But you don't feel really good about it.
JOSHUA ROTHMAN: Yeah, you know, I feel, uh, guilty about it.
JAD: Like he was a terrible son. And he says, he would especially have that feeling each week after her friends would visit her in the nursing home, and then, send him an email update saying, hey, this is how your mom is doing. Every time he would read one of those emails, even if she was doing well, his stomach would just drop.
JOSHUA ROTHMAN: This, this problem, this emotion, feeling guilty is one I've felt for a while. So, I said to Freud. (laughs) I said, uh, my mom is in a nursing home in another state, and, friends and family visit her, and they send me reports on how she's doing, and I, I always feel really bad when I get these reports.
ROBERT: And this is said in your voice. If you'd gla- gazed at the mirror while you were talking would you be saying it?
JOSHUA ROTHMAN: Yeah.
JAD: So after he said this to Freud ...
JOSHUA ROTHMAN: The world sort of faded out to black, and then it faded back in.
JAD: And suddenly the world had shifted. He was now across the room, behind the desk that had just been opposite of him, and he was inside the body of Freud. He looked down at himself, he was wearing a white, white shirt, gray suit. There was a mirror next to that desk. And, he looks at himself.
JOSHUA ROTHMAN: I have a little beard. You know, everything.
JAD: He looked just like Freud.
JOSHUA ROTHMAN: But the main thing that was really surprising was that across the room I could see myself. So, this is the avatar of me now. Um, and I watched myself, uh, say what I had just said. So...
JAD: Oh wow, so it p- it plays it back?
MAVI SANCHEZ-VIVES: Exactly. The recording is now replayed. The movements, and also the voice. And they see themselves as they talked about their problem.
JOSHUA ROTHMAN: So, first, I can see my... I'm sitting in the chair, and I'm sort of uncomfortable, I'm moving around. I take my hands and um, put them in my lap and fold them together, and then I take them apart, and then I put them together. You know, I can watch myself be nervous, and then I saw, uh, then I saw myself say what I just said.
JOSHUA ROTHMAN: My mom is in a nursing home in another state, and, friends and family visit her, and they send me reports ...
JOSHUA ROTHMAN: You know, in my voice.
JOSHUA ROTHMAN: And I always feel really bad.
JOSHUA ROTHMAN: You know, moving the way I move, and it was just like me watching myself. Um, and I guess the best way I can describe that was, it was moving.
ROBERT: What?
JOSHUA ROTHMAN: Moving. Moving like-
ROBERT: Moving as in emotionally?
JOSHUA ROTHMAN: Yeah, emotionally moving. I mean, I, I felt um, uh, I, I don't know if this is gonna make any sense, but, you know how there's a point in your life where you realize that your parents are just people?
ROBERT: Yes.
JAD: Yeah.
JOSHUA ROTHMAN: It was kinda like that. Except it was me.
JAD: Oh, interesting.
ROBERT: (laughs)
JAD: Did you feel, uh, closer to that guy, or, or...
JOSHUA ROTHMAN: I felt bad for him.
JAD: You felt bad for him.
ROBERT: For him, sorry.
JOSHUA ROTHMAN: Yeah, my feelings went out to this other person, who was me.
JAD: As he's having this empathetic reaction as Freud looking back at himself, the researcher's voice again appears in his ear.
MAVI SANCHEZ-VIVES: Give advice from the perspective of Sigmund Freud. Advice of how this, uh, problem could be solved. How you could deal with it.
JAD: Essentially respond to your patient.
JOSHUA ROTHMAN: So I didn't know what to say. So, I said, um, why do you think you feel bad? That was the, that was...
ROBERT: That was, that was a good Freudian kinda thing.
JAD: Yeah.
JOSHUA ROTHMAN: (laughs) Why do you think you feel bad?
JAD: Soon as he asked that, shoop, he's back in his body, his virtual body, staring back at virtual Freud, and he sees a playback of Freud asking him that question.
JOSHUA ROTHMAN: I watched Freud say this to me. "Why do you think you feel bad?" Except that when Freud talks they had something in the program that made his voice extra deep.
MAVI SANCHEZ-VIVES: It has, uh, some voice distortion, so deeper voice.
JOSHUA ROTHMAN: And so, his voice didn't sound like my voice.
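For readers curious how a setup like this hangs together, here is a minimal sketch of the swap-and-replay loop as described in this conversation: record the participant's movements and speech while they occupy one avatar, swap them into the other body, and replay the recording, applying the "deeper voice" distortion only when the words come from the Freud avatar. The names here (BodySwapSession, pitch_shift, and so on) are hypothetical stand-ins, not the researchers' actual software.

```python
from dataclasses import dataclass, field

@dataclass
class Turn:
    body: str      # which avatar the participant occupied ("self" or "freud")
    motion: list   # recorded movement data (placeholder)
    audio: str     # recorded speech (placeholder transcript)

def pitch_shift(audio: str) -> str:
    # Stand-in for the "deeper voice" distortion applied to Freud's replies.
    return f"[deeper voice] {audio}"

@dataclass
class BodySwapSession:
    turns: list = field(default_factory=list)
    current_body: str = "self"

    def record(self, motion, audio):
        # Capture what the participant did and said in their current body.
        self.turns.append(Turn(self.current_body, motion, audio))

    def swap(self):
        # Fade out, move the participant into the other avatar, fade back in.
        self.current_body = "freud" if self.current_body == "self" else "self"

    def replay_last(self):
        # Play the previous turn back to the newly embodied participant.
        last = self.turns[-1]
        audio = pitch_shift(last.audio) if last.body == "freud" else last.audio
        return f"Watching avatar '{last.body}' say: {audio}"

session = BodySwapSession()
session.record(motion=[], audio="I always feel bad when I get these reports.")
session.swap()                   # now embodied as Freud
print(session.replay_last())     # sees his own avatar repeat the confession
session.record(motion=[], audio="Why do you think you feel bad?")
session.swap()                   # back in his own body
print(session.replay_last())     # hears Freud's question in a deeper voice
```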
JAD: How did you respond as now you?
JOSHUA ROTHMAN: I said I feel bad because it doesn't seem right that I'm living far away.
JAD: Once again, shoop, he switches bodies. Now he's in Freud again staring back at himself.
JOSHUA ROTHMAN: And I watched myself say this. "I feel bad because..." And then, as Freud I said, well, why, why are you far away then?
JAD: Shoop, back into his own body. Freud says to him from across the room...
JOSHUA ROTHMAN: "Why are you far away then?" And I said, well, because, um, if my mom lived in New York, I'd be the only person here, but, if she's down in where she lives then, there's other people to visit her.
JAD: Shoop.
JOSHUA ROTHMAN: Back in Freud's body, and I said, so it sounds like there's, there's a reason why, um, why you live where you live? Um, so, if you know that, well, w- why do you still feel bad?
JAD: Shoop. Switches back to himself.
JOSHUA ROTHMAN: If you know that, why, why do you, why do you still feel bad? Um, I said something like, um, you're right. (laughs)
JAD: (laughs) Wow.
JOSHUA ROTHMAN: And went back into Freud, and then as Freud I said, you know, it sounds like the, uh, the thing that's making you unhappy, which is making you feel bad, which is getting these reports from these people is actually the whole reason why you decided to live in these, you know, to have, keep your mom where she is. Like there's a loop. Right? It's like these, these reports I get from my mom's friends make me feel bad, but, the whole reason why I decided to leave her in this place in Virginia is specifically so that there are friends who can visit her.
JAD: There's this classic idea in psychology called the reframe, which is where you try and take a problem, and reframe that problem into its solution. And, he says in that moment, he kind of did that. He had this very simple epiphany that his guilt was actually connected to something good.
JOSHUA ROTHMAN: I never had that thought before.
JAD: He chose to keep his mom in Virginia so that her friends would visit her more, and each time her friends visited, he felt bad, but, that meant they were visiting. So, the bad feeling, and the fact that he was feeling it so much was itself kind of evidence for the fact that he had made, if not the right decision, at least a decision that made sense.
JOSHUA ROTHMAN: The experience I had talking to myself as Freud was um, was nothing like the experience I had in my own head, turning this issue over and over.
MAVI SANCHEZ-VIVES: By switching back and forth, by swapping bodies, somehow you can give advice, um, from a different perspective.
JOSHUA ROTHMAN: When I was back in my own body and Freud said it to me I was just like, I just felt like, um, wow, good point. (laughs)
JAD: That's so amazing.
JOSHUA ROTHMAN: That was my thought.
ROBERT: But wouldn't your next thought be what the hell is going on here? Why am I able in this utterly fictive situation to split myself in two and heal myself?
JOSHUA ROTHMAN: Well, I took the headset off, and I sat there for a little while, while the researchers looked at me, um, trying to make sense of it, and I, I think what, what I keep coming back to is the seeing yourself just as a person. Not as you. Not with all the, uh, complexities and, um, stuff that is in y- your self experience of being yourself.
JAD: And, this might be the real key thing, like, when you are in your body, which you pretty much always are, you have all of these thoughts and feelings, which are attached to that body. It's sort of like when you go home for Thanksgiving and you walk into your parents' kitchen and suddenly you just kinda feel like you're a teenager again. Like all those same thought patterns from your youth kinda kick back into gear, because the context of that kitchen is powerful, and you, your body is that writ large. But, if you can jump out of it and go into a new one, suddenly all those constraints and all that context is gone.
JOSHUA ROTHMAN: When I'm embodied as Freud, not only do I look different and think this is my body, but I feel different, and I have different types of thoughts, and I see, um, people differently.
JAD: And Josh says what he saw when he was Freud looking back at himself, was just a guy who needed help.
JOSHUA ROTHMAN: When someone comes to you and asks for help, your feelings are not complicated. They're just tenderness, kindness. Your instinct is to help them.
JAD: And he says he was able to bring that very simple way of being back to himself. Did it, did it make a difference? Did you walk out of that with, with s- a different feeling about yourself?
JOSHUA ROTHMAN: I did. I, I think, um, I've had a feeling of... I think it revised my feeling about m- who I was a little. I think it made me feel a little more, um... I, I don't even have a word for it. Just a little more human.
JAD: Josh Rothman's a writer for The New Yorker; his story first appeared there, and we told it to that live audience at the Greene Space.
ROBERT: Hm, so Brian, this is, you, you get the last word.
BRIAN CHRISTIAN: To me this is really interesting because the history of chatbots begins with a chatbot program written in the '60s by an MIT professor, uh, named Joseph Weizenbaum, and the program was called ELIZA, and it was designed to mimic this non-directive Rogerian therapist, where you would say, "I'm feeling sad," and it would just throw it back to you as a kind of Mad Lib: "I'm sorry to hear you're sad. Why are you sad?"
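For the curious, the reflection trick Brian describes can be captured in a few lines. The sketch below is a toy illustration, not Weizenbaum's original program: it pattern-matches a feeling statement, swaps first-person words for second-person ones, and hands the sentence back as a question. The patterns and pronoun table are illustrative assumptions.

```python
import re

# Swap first-person words for second-person so the echo reads naturally.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "mine": "yours"}

# Ordered rules: the first matching pattern produces the response.
RULES = [
    (re.compile(r"i feel (.+)", re.I), "I'm sorry to hear you feel {0}. Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "Why do you think you are {0}?"),
    (re.compile(r"(.+)"), "Tell me more about that."),
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(statement: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1).rstrip(".!")))
    return "Please, go on."

print(respond("I feel sad"))
# -> I'm sorry to hear you feel sad. Why do you feel sad?
```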
BRIAN CHRISTIAN: And Weizenbaum was famously horrified when he walked in on his secretary just like, spilling her life's innermost thoughts and feelings to this program that she had seen him write. You know, so there's no, there's no mystery there. But, he came away from that experience feeling appalled at the degree to which people will sort of project, um, human intention onto just technology, and, his reaction was to pull the plug on his own research project, and for the rest of his life, he became one of the leading critics of chatbot technology and of AI in general. Um, and I think it's really powerful to juxtapose that against the story that you've just shared, which tells us that there's, there's more, there's more to the picture than that. That there are ways to use this technology in a way that doesn't sort of distance us, but, in a way that sort of, enables us to be more fully human.
BRIAN CHRISTIAN: And I think that's a wonderful way to think about it.
ROBERT: Well, why don't we just leave it there, uh, pleasantly. We have some thanks to give. B- but we have particular, particular thanks to give to the person who made this whole cybersphere around us possible, that's Lauren Kunze. Lauren, oh, like that's Lauren.
JAD: Thank you to Pandorabots, which is the platform that powers, uh, conversational AI software for hundreds of thousands of global brands and developers. Learn more about their enterprise offering and services at pandorabots.com. Thanks also to Chance Bone for designing the Robert Or Robot artwork for tonight. And, of course, to Brian Christian for coming here to talk with us today.
ROBERT: Yes, thank you. And to you.
JAD: Okay.
ROBERT: Okay, thank you all.
JAD: Thank you guys so much. This episode was recorded and produced by Simon Adler and our live event was produced with machine-like efficiency by Simon Adler and Suzie Lechtenberg.
JAD: By the way thanks to Dylan Keefe, Alex Overington, and Dylan Greene for original music.
[ANSWERING MACHINE: Start of message.]
[BRIAN CHRISTIAN: Hi this is Brian Christian. Radiolab was created by Jad Abumrad, and is produced by Soren Wheeler. Dylan Keefe is our director of sound design. Maria Matasar-Padilla is our managing director. Our staff includes Simon Adler, Maggie Bartolomeo, Becca Bressler, Rachael Cusick, David Gebel, Bethel Habte, Tracie Hunte, Matt Kielty, Robert Krulwich, Annie McEwen, Latif Nasser, Malissa O'Donnell, Arianne Wack, Pat Walters, and Molly Webster. With help from Amanda Aronczyk, Shima Oliaee and Reeve Cannon. Our fact checker is Michelle Harris.]
-30-
Copyright © 2024 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of programming is the audio record.