
Jun 1, 2011
Transcript
[RADIOLAB INTRO]
ROBERT EPSTEIN: Hi there.
JAD ABUMRAD: We're gonna start today's program with a fellow named Robert.
JAD: Is it Ep-steen or Ep-stine?
ROBERT EPSTEIN: Just think Einstein with an "Ep."
JAD: Okay. [laughs]
ROBERT KRULWICH: That would make it Ep-stine, I guess.
ROBERT EPSTEIN: That's right.
JAD: And where are we reaching you right now?
ROBERT EPSTEIN: I am in San Diego area.
JAD: Robert Epstein is a psychologist.
ROBERT EPSTEIN: Former editor-in-chief of Psychology Today magazine.
JAD: He's written a ton of books on relationships and love, and he also happens to be one of the world's leading researchers in computer-human interactions. Like artificial intelligence, basically.
ROBERT EPSTEIN: That is correct, yes.
ROBERT: So when did you decide to go onto the computer to get a date?
ROBERT EPSTEIN: [laughs] 2006, maybe. Why do you ask?
JAD: Oh, no reason.
ROBERT: What happened? You were—you had gotten divorced?
ROBERT EPSTEIN: Yeah, I was single at the time. Yeah, I was divorced.
ROBERT: And you decided that you'd try love in all the right places, or what?
ROBERT EPSTEIN: Oh, sure. Well, online dating? Everyone was doing it. My cousin actually convinced me to try it. So I did, and I went online and I looked at photos and I looked at profiles and, you know, and I communicated with various people who were willing to talk to me. And one of the women I was communicating with lived in Southern California, where I do. So I thought that's great because, you know, you want someone to be nearby. And she had a very attractive photo online. And her English was poor, which at first bothered me and then she said well, she's not really in California, she's really in Russia.
JAD: Oh!
ROBERT EPSTEIN: But all four of my grandparents came from Russia, so I thought well, I'll go with it. So I continued to write to her.
[VOICE ACTOR, Robert: Hi, sweet Svetlana. It's very warm here now, and I've been doing a lot of swimming. I've also been writing, doing computer programming.]
ROBERT EPSTEIN: She wrote back to me in very poor English.
[VOICE ACTOR, Svetlana: Hello, dear Robert. Dear mine, I have received your letter. I am very happy.]
ROBERT EPSTEIN: I remember that she liked to walk in parks.
[VOICE ACTOR, Svetlana: Went on walk with the girlfriend and we went and walked in park.]
ROBERT EPSTEIN: And her telling me about her family and her mom.
[VOICE ACTOR, Svetlana: My mom asked me about you today, and we spoke much and long time.]
ROBERT EPSTEIN: They lived in a small apartment. I knew where in Russia they lived.
[VOICE ACTOR, Svetlana: Yours, Svetlana.]
ROBERT EPSTEIN: It felt like we were bonding, for sure.
[VOICE ACTOR, Robert: Hello. I might be able to come to Moscow on Sunday, April 15th, departing Thursday, April 19th. With love, Robert.]
JAD: Oh, so it was getting serious.
ROBERT EPSTEIN: Oh, yeah, of course.
ROBERT: And then what happened?
ROBERT EPSTEIN: Well, two months passed and I began to feel uncomfortable. Something wasn't right.
[VOICE ACTOR, Svetlana: Hello, my dear.]
ROBERT EPSTEIN: There were no phone calls.
[VOICE ACTOR, Svetlana: Dear mine, I am very happy.]
ROBERT EPSTEIN: At some point, I began to suggest a phone call, but there weren't any. But the main problem was I would say something like ...
[VOICE ACTOR, Robert: Did you get my letter about me coming to Moscow in April?]
ROBERT EPSTEIN: Or "Tell me more about this friend of yours that you mentioned," and she did not.
[VOICE ACTOR, Svetlana: Dear mine, I am very glad to your letter.]
ROBERT EPSTEIN: She did not. She was still replying with fairly long emails ...
[VOICE ACTOR, Svetlana: I'm fine. Weather in my city, very bad.]
ROBERT EPSTEIN: ... but they were kind of rambling and general.
[VOICE ACTOR, Svetlana: I think of you always much, and I very much want to see more like you.]
[VOICE ACTOR, Robert: I already gave you some dates for a visit to Moscow, my love. What do you think about that?]
ROBERT EPSTEIN: Then at some point, a little bell went off in my head, finally, and I started to send some emails which, let's say, included random alphabet letters.
ROBERT: Wait a second. So you say, "How—what are you wearing tonight? Are you wearing a dbgggglp?"
ROBERT EPSTEIN: Exactly. And it didn't make any difference.
[VOICE ACTOR, Svetlana: Hello, dear Robert. Your letters do me very happy when I open a letterbox.]
ROBERT EPSTEIN: And that's when I realized Svetlana was not a person, Svetlana was a computer program. I had been had.
JAD: Wow. So what did you think?
ROBERT EPSTEIN: I felt like a fool. I felt like an incredible fool, especially given my background.
JAD: Yeah.
ROBERT EPSTEIN: That I had been fooled that long. Now I can tell you, now this is something that I have never made public about the other example ...
JAD: Robert went on to tell us that not long after that first incident, he was corresponding with someone ...
ROBERT EPSTEIN: With a woman, I thought.
JAD: ... Who also turned out to be a robot. And he discovered it this time because ...
ROBERT EPSTEIN: The programmer contacted me from the UK and said, "I know who you are. You have not been communicating with a person. You've been communicating with a chatbot."
JAD: Whoa!
ROBERT: You've been now undressed twice by robots.
JAD: So to speak.
ROBERT EPSTEIN: Well, and maybe more than twice.
JAD: Well, how common do you think this is? Do you think that Match.com and all those places are, like, swarming with these bots?
ROBERT EPSTEIN: You know, I bet you they are. That's what you have to understand: there are hundreds of these things out there. There might be thousands.
[COMPUTER VOICE: You're amazing!]
ROBERT EPSTEIN: That's what's coming.
[COMPUTER VOICE: What sign are you? I told my girlfriends all about you.]
ROBERT: So in a world like this ...
[COMPUTER VOICE: You're wonderful!]
ROBERT: ... where we are surrounded by artificial life forms ...
[COMPUTER VOICE: What do you look like?]
JAD: Things can get a little confusing. And in fact, we're gonna do a whole show about that confusion, about the sometimes peculiar ...
ROBERT: Sometimes strange ...
JAD: ... things that can happen when humans and machines collide.
ROBERT: Collide, but don't quite know who's on what side of the road?
JAD: Yeah.
ROBERT: I don't know.
JAD: I'm Jad Abumrad.
ROBERT: And I'm Robert ...
JAD: That was good. That was good. Just go with it.
ROBERT: Okay, I'm Robert Krulwich.
JAD: This is Radiolab.
ROBERT: And we're talking to machines.
[COMPUTER VOICE: You are so special!]
[COMPUTER VOICE: Send me your credit card info?]
[COMPUTER VOICE: I love peppermint!]
JAD: To start things off, let's introduce you to the person who really hooked us on this whole idea of human-robot chitchat.
BRIAN CHRISTIAN: My name is Brian Christian.
JAD: He's a writer.
JAD: Are you Christian?
BRIAN CHRISTIAN: Religiously? No.
JAD: That's not at all related to anything.
ROBERT: What's wrong with you?
JAD: It's his name! But it—no, what's important is that he wrote a book ...
BRIAN CHRISTIAN: Called The Most Human Human.
JAD: Which is all about the confusing things that can happen when people and machines interact.
ROBERT: How did you—this is such a curious thing to get ...
JAD: Yeah. How did you get into this?
BRIAN CHRISTIAN: I played with MS-DOS intently when I was a child.
JAD: Oh, there you go.
BRIAN CHRISTIAN: Yeah.
JAD: DOS is kind of the precursor to Windows.
BRIAN CHRISTIAN: I was programming these sort of rudimentary maze games.
JAD: Like a cursor going through a maze?
BRIAN CHRISTIAN: Yeah, basically.
ROBERT: Did this by any chance mean you did not develop best friends?
JAD: [laughs]
BRIAN CHRISTIAN: A lot of my best friends were also into that, yeah.
JAD: Wow!
BRIAN CHRISTIAN: We were not the coolest, but we had a lot of fun.
ROBERT: So there you are, and you just had a—you just had a talent for this?
BRIAN CHRISTIAN: Yeah. I don't know what it was. I mean, I was just—there was something I think fascinating to me that you could take a process that you knew how to do, but in breaking it down to steps that were that explicit, you often learned something about how the process actually works. For me, programming is surprisingly linked to introspection.
JAD: How, exactly?
BRIAN CHRISTIAN: Well, you know, if a computer were a person, you could imagine someone sitting in your living room and you say, you know, "Can you hand me that book?" And it would say, "No, I can't do that because there's a coffee cup on it." And you say, "Okay. Well, pick up the coffee cup and hand me the book." And it says, "Well, I can't do that because now I'm holding the cup." And you say, "Okay, put down the cup, then pick up the book."
JAD: And what you quickly learn, says Brian, is that even really simple human behaviors are made up of a thousand subroutines. I mean, if you really think about it, the book task requires knowing what is a book.
ROBERT: You have to learn all about elbows and wrists.
JAD: How to grab something.
ROBERT: What is a book?
JAD: I already said that.
ROBERT: Oh.
JAD: You need to know about gravity.
ROBERT: If it's a machine, you have to teach it ...
JAD: Physics.
ROBERT: ... everything in the world in order for it to just pick up a [bleep] spoon.
JAD: Or a book.
ROBERT: I knew that.
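[To make that point concrete, here is a toy sketch in Python, not anything Brian wrote: a pretend "world" stands in for perception and physics, and every step a person does without thinking becomes an explicit check. All of the names and details are invented for illustration.]

```python
# A toy decomposition of "hand me that book" into explicit subroutines.
# The dictionary below stands in for perception and physics; each print
# statement hides what would really be vision, grasping and motor control.
world = {
    "book": {"location": "table", "on_top": ["coffee cup"]},
    "hand": {"holding": None},
}

def hand_me(item):
    obj = world[item]
    # Clear anything stacked on top of the item first.
    while obj["on_top"]:
        blocker = obj["on_top"].pop()
        print(f"Moving the {blocker} out of the way")
    # Put down whatever the hand is already holding.
    if world["hand"]["holding"]:
        print(f"Putting down the {world['hand']['holding']}")
        world["hand"]["holding"] = None
    world["hand"]["holding"] = item
    print(f"Handing over the {item}")

hand_me("book")
```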
JAD: So now think of that Svetlana bot earlier, okay? Trying to make something that could actually mimic human conversation, kinda sorta. Imagine all the stuff you'd have to throw into that. Okay, English, grammar.
ROBERT: Syntax.
JAD: Syntax.
ROBERT: Context.
JAD: Tone.
ROBERT: Mood.
JAD: Sarcasm.
ROBERT: Irony.
JAD: Adverbs.
ROBERT: Adverbs.
JAD: Turn-taking.
ROBERT: Well, it's not actually as impossible as you'd imagine. This is kind of startling. If you go back to the very early days of software programming in the mid-'60s?
BRIAN CHRISTIAN: 1964-1965.
ROBERT: This was actually done with a little program ...
BRIAN CHRISTIAN: Called Eliza. And it was developed by Joseph Weizenbaum at MIT.
ROBERT: But in Weizenbaum's case, his model was not a Russian hottie. Instead, it was a—well ...
BRIAN CHRISTIAN: Non-directive Rogerian therapist.
JAD: The what therapist?
BRIAN CHRISTIAN: It's a particular school of therapy.
ROBERT: The kind where the therapist basically mirrors ...
SHERRY TURKLE: ... it mirrors what you're saying.
ROBERT: ... what you're saying.
SHERRY TURKLE: What you're saying.
ROBERT: This is Sherry Turkle. She's an anthropologist.
SHERRY TURKLE: At the Massachusetts Institute of Technology.
ROBERT: And she worked with Joe Weizenbaum—or is it Weizenbaum? It's Weizenbaum—at MIT.
SHERRY TURKLE: So if you say, you know, I ...
BRIAN CHRISTIAN: I'm feeling depressed.
SHERRY TURKLE: A therapist says ...
BRIAN CHRISTIAN: "I'm sorry to hear you're feeling depressed."
SHERRY TURKLE: "Tell me more."
BRIAN CHRISTIAN: Joseph Weizenbaum decides, "You know, I think that's an easy enough type of conversation that I can program that into my computer."
ROBERT: And so he writes up a simple little program.
BRIAN CHRISTIAN: Just about a hundred lines of code.
ROBERT: Which does sort of what your therapist does.
BRIAN CHRISTIAN: Where it looks for a keyword in what you're saying.
ROBERT: As in, "I'm feeling depressed."
JAD: Keyword: depressed.
BRIAN CHRISTIAN: Latches onto it, and then basically flips it back to you.
ROBERT: "I'm sorry to hear that you're feeling ...
JAD: Keyword.
ROBERT: ... depressed.
BRIAN CHRISTIAN: Right.
SHERRY TURKLE: It's basically a program that inverts your words. And it's a language game.
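[As a rough illustration of the keyword trick Brian and Sherry are describing, here is a minimal Eliza-style sketch in Python. The rules, wording and pronoun table are invented for illustration; Weizenbaum's actual script was more elaborate.]

```python
import re

# A few Eliza-style rules: match a keyword pattern, reflect the words back.
RULES = [
    (r"i am feeling (.*)", "I'm sorry to hear you are feeling {0}. Tell me more."),
    (r"i am (.*)",         "How long have you been {0}?"),
    (r"my (.*)",           "Tell me more about your {0}."),
    (r"(.*)",              "Please go on."),  # fallback when nothing matches
]

# Flip first- and second-person words so the reply points back at the speaker.
REFLECT = {"i": "you", "me": "you", "my": "your", "you": "I", "your": "my"}

def reflect(text):
    return " ".join(REFLECT.get(word, word) for word in text.split())

def eliza(utterance):
    cleaned = utterance.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.match(pattern, cleaned)
        if match:
            return template.format(*(reflect(group) for group in match.groups()))

print(eliza("I am feeling depressed"))
# -> I'm sorry to hear you are feeling depressed. Tell me more.
```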
ROBERT: So here's what he did. He put a computer on a desk running this little program, and then he invited people to sit down and talk to it. They'd write in something, it would spit back the keyword, and then you'd repeat, on and on. You wouldn't think this would be very interesting, right?
JAD: Nope.
SHERRY TURKLE: But he found that his students who knew that the program didn't know or understand, who had helped him write the program, sophisticated students, nevertheless wanted to be alone with it.
ROBERT: Wow, they actually helped write the program!
SHERRY TURKLE: Yes!
BRIAN CHRISTIAN: I can read you if you like one of the conversations.
JAD: Yes, please.
ROBERT: This is an actual transcript between Eliza and a young woman.
BRIAN CHRISTIAN: Yeah. "Men are all alike." "In what way?" "They're always bugging us about something or other."
[VOICE ACTOR, Eliza: Can you think of a specific example?]
[VOICE ACTOR, woman: Well, my boyfriend made me come here.]
[VOICE ACTOR, Eliza: Your boyfriend made you come here?]
[VOICE ACTOR, woman: He says I'm depressed much of the time.]
[VOICE ACTOR, Eliza: I'm so sorry to hear you are depressed.]
[VOICE ACTOR, woman: It's true. I'm unhappy.]
[VOICE ACTOR, Eliza: Do you think coming here will help you not to be unhappy?]
[VOICE ACTOR, woman: I need some help. That much seems certain.]
[VOICE ACTOR, Eliza: What would it mean to you if you got some help?]
[VOICE ACTOR, woman: Perhaps I could learn to get along with my mother.]
BRIAN CHRISTIAN: Tell me more about your family.
JAD: Wow!
BRIAN CHRISTIAN: And so it's this ...
JAD: That could go on for hours!
BRIAN CHRISTIAN: Well, and in fact it did.
[VOICE ACTOR, woman: My mother takes care of me.]
[VOICE ACTOR, Eliza: Who else in your family takes care of you?]
[VOICE ACTOR, woman: My father.]
[VOICE ACTOR, Eliza: Your father.]
[VOICE ACTOR, woman: You're like my father in some ways.]
[VOICE ACTOR, Eliza: What resemblance do you see?]
[VOICE ACTOR, woman: Well, you're not very aggressive.]
[VOICE ACTOR, Eliza: What makes you think I'm not very aggressive?]
[VOICE ACTOR, woman: You don't argue with me.]
[VOICE ACTOR, Eliza: Why do you think I don't argue with you?]
[VOICE ACTOR, woman: You are afraid of me?]
JAD: Wait a second. Is the woman in this transcript messing around, or is she really pouring her heart out?
ROBERT: We know that this is a woman who works for Joe Weizenbaum. She's sitting in the office, and she just can't stop talking to it. That's all we know.
[VOICE ACTOR, Eliza: What else comes to mind when you think of your father?]
[VOICE ACTOR, woman: Boys.]
ROBERT: And Weizenbaum is watching all this ...
SHERRY TURKLE: And he first thought it was funny, and then he didn't think it was funny because they were actually having conversations with it.
BRIAN CHRISTIAN: One day, he comes into the office, and ...
SHERRY TURKLE: ... his secretary ...
BRIAN CHRISTIAN: ... is on the computer, divulging her life story to it.
ROBERT: According to Weizenbaum, she even told him to please leave the room so she could be alone with it.
SHERRY TURKLE: And talk to it. And he—he was very upset. Nevertheless ...
ROBERT: When word about Eliza got out ...
BRIAN CHRISTIAN: The medical community sort of latches onto it.
JAD: Really?
BRIAN CHRISTIAN: And says, "Oh, this is gonna be the next revolution in therapy."
[NEWS CLIP: Something new and promising in the field of psychotherapy.]
ROBERT: This is from a newscast around that time.
BRIAN CHRISTIAN: Therapists in, like, phone booths in cities. And you're gonna walk in and put a quarter in the slot and have, you know, half an hour of therapy with this automatic program.
[NEWS CLIP: Computer time can be rented for $5 an hour, and there's every reason to suspect that it will go down significantly.]
JAD: People really thought that they were gonna replace therapists with computers?
BRIAN CHRISTIAN: Absolutely.
JAD: Really?
ROBERT: They did?
BRIAN CHRISTIAN: Absolutely.
ROBERT: Yeah.
BRIAN CHRISTIAN: And it was just this really appalling moment for Weizenbaum of there's something—the genie is out of the bottle, maybe in a bad way. And he does this 180 of his entire career. So he pulls the plug on the program, he cuts the funding, and he goes from being one of the main advocates for artificial intelligence to basically committing the rest of his career to fighting against artificial intelligence.
[ARCHIVE CLIP, Joe Weizenbaum: [speaking German]]
ROBERT: This is Joseph Weizenbaum interviewed in German just before he died in 2008. It was on the German documentary Plug and Pray.
[ARCHIVE CLIP, Joe Weizenbaum: [speaking German]]
ROBERT: "My main objection," he says ...
[ARCHIVE CLIP, Joe Weizenbaum: [speaking German]]
ROBERT: "If the thing says, 'I understand,' that if somebody typed in something and the machine says 'I understand.'"
[ARCHIVE CLIP, Joe Weizenbaum: [speaking German]]
ROBERT: "There's no one there."
[ARCHIVE CLIP, Joe Weizenbaum: [speaking German]]
ROBERT: "So it's a lie."
[ARCHIVE CLIP, Joe Weizenbaum: [speaking German]]
ROBERT: "And I can't imagine that people who are emotionally imbalanced could be effectively treated by systematically lying to them."
SHERRY TURKLE: I must say that my reaction to the Eliza program at the time was to try to reassure him. At the time, what I thought people were doing was using it as a kind of interactive diary—knowing that it was a machine, but using it as an occasion to breathe life into it in order to get their feelings out.
JAD: I think she's right to have said that to him.
ROBERT: You do?
JAD: Yeah, because he says it's a lie.
ROBERT: Well, it is a lie.
JAD: How is it a lie?
ROBERT: Well, because a machine can't love anything.
JAD: Yes, and if you are a sensible human being you know that. And it's sitting right there on the desk. It's not pretending.
ROBERT: Well, these are sensible human beings that were already a little bit seduced. I mean, just go forward a hundred years. Imagine a machine that is very sophisticated, very fluent, very convincingly human.
JAD: You're talking about Blade Runner, basically.
ROBERT: Yeah, exactly. At that point, I think I would require some kind of label to remind me that this is a thing. It's not a being, it's just a thing.
JAD: Okay, but if—here's something to think about: if the machines get to that point—which is a big if—where you'd want to label them, well, you're gonna need a way to know when they've crossed that line and become ...
ROBERT: Mindful.
JAD: Yeah.
BRIAN CHRISTIAN: Yeah, so I should back up for a sec and say that in 1950, they're just starting to develop the computer, and they're already asking these philosophical questions. Like, can these machines think? You know, will we someday be able to make a machine that could think? And if we did, how would we know? And so a British mathematician named Alan Turing ...
JAD: ... proposed a simple thought experiment: here's how we'll know when the machines make it across the line. Get a person, sit him down at a computer, have him start a conversation in text.
BRIAN CHRISTIAN: You know, "Hi, how are you?" Enter. "Good" pops up on the screen.
JAD: Sort of like internet chat.
BRIAN CHRISTIAN: Yup.
JAD: So after that first conversation, have him do it again and then again. You know, "Hi. Hello. How are you?" Et cetera.
BRIAN CHRISTIAN: Back and forth.
JAD: Then again.
BRIAN CHRISTIAN: Right.
JAD: Over and over. But here's the catch ...
BRIAN CHRISTIAN: Half of these conversations will be with real people, half will be with these computer programs that are basically impersonating people.
JAD: And the person in the seat, the human, has to judge which of the conversations were with humans and which were with machines. Turing's idea was that if those computer fakes could fool the human judge a certain percentage of the time ...
BRIAN CHRISTIAN: Turing's magic threshold was 30 percent.
JAD: ... then at that point ...
BRIAN CHRISTIAN: ... we can basically consider machines intelligent.
JAD: Because, you know, if you can't tell the machine isn't human, then you can't say it's not intelligent.
BRIAN CHRISTIAN: Yeah, that's basically—yeah.
ROBERT: You said 30 percent of the time?
BRIAN CHRISTIAN: Yeah. Turing ...
ROBERT: Because the natural number to me would be half, you know? 51 percent would seem to be like the ka-ching moment.
BRIAN CHRISTIAN: Right.
ROBERT: 30 percent? I don't know.
BRIAN CHRISTIAN: Well, 51 percent is actually a horrifying number in the context of the Turing test, because you've got these two conversations and you're trying to decide which is the real person. So if the computer were indistinguishable, that would be 50 percent. You know, the judge is doing no better than chance. So if a computer hits 51 percent ...
JAD: Yeah.
BRIAN CHRISTIAN: ... that means they've out-humaned the human.
JAD: Oh yeah, that is horrifying.
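[To put numbers on the thresholds Brian is describing, here is a small scoring sketch in Python. The judge verdicts are made up for illustration: True means a judge mistook the program for the human in that paired conversation.]

```python
# Made-up verdicts from ten judges; True = the judge picked the program as the human.
verdicts = [True, False, False, True, False, False, False, True, False, False]

fooled = sum(verdicts) / len(verdicts)
print(f"Program fooled {fooled:.0%} of the judges")  # 30% with the list above

# 50 percent would mean the judges do no better than a coin flip (indistinguishable);
# anything above 50 percent means the program gets picked as the human more often
# than the actual human does. Turing's informal bar was about 30 percent.
if fooled > 0.5:
    print("The program has out-humaned the human.")
elif fooled >= 0.3:
    print("The program clears Turing's 30 percent threshold.")
```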
JAD: Now something to keep in mind: when Turing thought this whole thing up?
BRIAN CHRISTIAN: The technology was so new ...
JAD: Computers barely existed.
BRIAN CHRISTIAN: ... that it was sort of a leap of imagination, really.
JAD: But no longer. Robert, bring it!
ROBERT: Can you give me, like, some kind of excitement music here?
JAD: Absolutely.
ROBERT: Good. Because every year, the greatest technologists on the planet meet in a small room with folding chairs, and put Alan Turing's question to the ultimate test!
JAD: [laughs] Really, it's just a couple of dudes, you know, who haven't seen the sun in 10 years in a room. But we do now have this thing called the Loebner Prize, which is essentially a yearly actual Turing test.
[ARCHIVE CLIP, Loebner Prize host: Each judge on our judges' table is going to be communicating with two entities—one human and one program.]
BRIAN CHRISTIAN: The way the stage is set up is you've got the judges at a table on the left on laptops.
JAD: Uh-huh.
BRIAN CHRISTIAN: Then a bunch of giant server-looking machines in the middle that the programmers are fiddling with. And then there's a curtain on the right hand side and we're behind the curtain.
JAD: Brian actually participated in the 2009 Loebner Prize competition, but not as a programmer, as one of the four quote "Confederates."
BRIAN CHRISTIAN: The Confederates are the real people that the judges are talking to.
JAD: Because remember, half the conversations the judges have are with people, half are with computers. And then Brian decided to participate that year because the year before ...
BRIAN CHRISTIAN: 2008, the top program managed to fool 25 percent of the judging panel.
JAD: Pretty close to Turing's number.
BRIAN CHRISTIAN: Exactly. One vote away. And so I felt, to some extent, how can I get involved on behalf of humanity? How can I sort of take a stand?
JAD: [laughs]
ROBERT: That's a modest position for you. "All right machines, please hold your places. And now representing all humans: Brian Christian!"
JAD: Now in terms of what Brian is up against, the computer programs have a variety of different strategies. For example, there was one program in Brian's year that would do kind of a double fake out.
ROBERT: Uh-huh.
JAD: Where it would pretend not to be a person, but a ...
BRIAN CHRISTIAN: ... person who is sarcastically pretending to be a robot.
ROBERT: Oh!
BRIAN CHRISTIAN: People would ask it a simple question and it would say, "I don't have enough RAM to answer that question." Smiley face.
ROBERT: [laughs]
BRIAN CHRISTIAN: And everyone would be like, "Oh, this is such a wise guy. Ha ha ha."
JAD: I want to tell you now about one particular bot that competed Brian's year.
[ARCHIVE CLIP, Rollo Carpenter: Hi, I'm Rollo Carpenter.]
JAD: That's the guy who made it.
[ARCHIVE CLIP, Rollo Carpenter: My program is called Cleverbot.]
JAD: And that's the bot. This is a program that employs a very spooky—is spooky the right word? A very spooky strategy.
ROLLO CARPENTER: You may be surprised to hear that, despite the fact that it's called Cleverbot, it states that it is a bot. It states that it is never a human right there in front of them. Despite those facts, I receive several emails a day from people who believe that actually, they are being connected to humans.
JAD: Oh, like they think they've been tricked?
ROLLO CARPENTER: Yes, tricked into coming to a site that claims to be a bot, when in fact they're talking to humans. That no program could possibly respond in this way. And there is a certain element of truth in that.
JAD: To explain, Rollo Carpenter, like Brian, was one of those kids who was completely obsessed by computers.
ROLLO CARPENTER: I was indeed a computer-y kid.
JAD: And when he was just a teenager ...
ROLLO CARPENTER: Age about 16 or so ...
JAD: ... wrote his first chatbot.
ROLLO CARPENTER: I created a program that talked to me.
JAD: No kidding?
ROLLO CARPENTER: Yes. You typed in something, and it would say something back.
JAD: Though at that time, the responses were essentially ...
ROLLO CARPENTER: Pre-programmed.
JAD: And really simple. Kind of like Eliza. But ...
ROLLO CARPENTER: One evening, I think it was ...
JAD: ... fast forward many years. He is in his apartment. And one night, he says ...
ROLLO CARPENTER: A switch suddenly flipped in my mind, and I suddenly saw how to make the machine learn ...
JAD: On its own. What if, he thought, what if it just started at zero like a little baby? And it would grow in these discrete little increments every time you talked to it.
ROLLO CARPENTER: Right. Basically, the first thing that was said to that program that I created the first version of that night, was said back by it.
JAD: Meaning, if he said to it "Hello," it now knew one thing: the word "Hello," so it would say "Hello" back.
ROLLO CARPENTER: The second thing it said was a choice of the first two things said to it.
JAD: So if the second thing you said was, "How are you doing?" it now knew two things: the word "Hello," and the phrase "How are you doing?" So it could either say "Hello" back again, or "How are you doing?"
ROLLO CARPENTER: The third thing it said was a choice of the first three things, and so on ad infinitum—well, not quite ad infinitum but between 1988 and 1997, a few thousand conversations took place between myself and it and a few of my friends and it.
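[A minimal sketch of the bootstrapping Rollo describes, in Python: the program's only possible replies are the things that have already been said to it. This is an illustration of the idea, not Carpenter's actual code.]

```python
import random

# The bot's entire "knowledge" is the list of things people have said to it so far.
# Its nth reply is drawn from the first n things it has heard, so the very first
# reply can only be an echo of the very first input.
heard = []

def reply(user_says):
    heard.append(user_says)
    return random.choice(heard)

print(reply("Hello"))               # can only be "Hello"
print(reply("How are you doing?"))  # either "Hello" or "How are you doing?"
```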
JAD: He and his friends would sit there and type things to it as a way of teaching it new things, but it was just them, so it was slow going.
ROLLO CARPENTER: So it languished for quite a long time. But then I started working with the internet, put it online.
JAD: Where anyone could talk to it.
ROLLO CARPENTER: Within the next 10 years, it had learned something like five million lines of conversation. Now it is frequently handling around 200,000 requests an hour, and it's talking to more than three million people a month.
JAD: Three million conversations a month. And after each one, Cleverbot knows a little bit more than it did before. And every time you say something to it like, "Hey, Cleverbot. Why am I so sad?"
ROLLO CARPENTER: It is accessing the conversations that millions of people have had in the past.
JAD: Asking itself ...
ROLLO CARPENTER: "What is the best overlap?" "Where is the best correlation?"
JAD: "How do people usually answer this question, why am I so sad?"
ROLLO CARPENTER: That's right.
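[The retrieval step Rollo is describing, find the past human line that best overlaps with what was just typed and answer with whatever a real person said next, can be sketched like this. The scoring and the sample log are invented; the real system draws on millions of conversations.]

```python
# A toy log of past (user line, human reply) pairs, invented for illustration.
past_log = [
    ("why am i so sad", "Just because."),
    ("why am i so tired", "Because you have been sitting in the same place for too long."),
    ("what is your name", "Cleverbot."),
]

def respond(user_says):
    words = set(user_says.lower().strip(" ?!.").split())
    # "Best overlap" here is just shared words, a stand-in for the real measure.
    best = max(past_log, key=lambda entry: len(words & set(entry[0].split())))
    return best[1]  # hand back what a human once said at this point

print(respond("Why am I so sad?"))  # -> Just because.
```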
JAD: And then a response. Cleverbot answers, "Just because." Hmm, all right. "Well, why? There must be a reason why I'm so sad." "Because you have been sitting in the same place for too long."
ROLLO CARPENTER: [laughs]
JAD: Is that—who's saying that, exactly? Where does that response come from?
ROLLO CARPENTER: And the answer is: it is one human being at some point in the past having said that.
JAD: So that is one moment of human conversation from one person.
ROLLO CARPENTER: Yes.
JAD: So it's like I'm talking to a ghost.
ROLLO CARPENTER: You are talking to—its intelligence, if you like, is borrowed from millions of people in the past. A little bit of their conversational knowledge, their conversational intelligence goes into forming your reply.
JAD: Now what's interesting, says Rollo, is that when you start a conversation with Cleverbot, it doesn't really have a personality—or no one personality.
ROLLO CARPENTER: Cleverbot is everything to everyone.
JAD: It's just this big hive, really. But as you keep talking to it, and it's sort of pulling forward from the hive these little ghost fragments of past conversations, stitching them together, a form does kind of emerge.
ROLLO CARPENTER: It reflects the person that it's speaking to. It becomes somewhat like that person.
JAD: Someone familiar.
ROLLO CARPENTER: Already, people have very emotional conversations with it. People have complete arguments with it. And, of course, they try to get it into bed.
JAD: By talking dirty to it?
ROLLO CARPENTER: Yeah.
JAD: Wow.
ROLLO CARPENTER: One thing I can tell you is that I have seen a single person, a teenage girl, speaking for 11 hours with just three 15-minute breaks.
JAD: Whoa! About what?
ROLLO CARPENTER: Everything. The day will come not too far down the road where Cleverbot becomes so interesting to talk to that people will be talking to it all day every day.
JAD: But we're not there yet. Because the same thing that makes Cleverbot so interesting to talk to also can make it kind of ridiculous. For example, in our interview with Brian—he was the first person to turn us on to this program—as we were talking, Soren just sort of suggested, "Well, why don't we just try it right now?"
SOREN WHEELER: You want to try it? You want to talk—you want to tell—say to Cleverbot, "I feel blue?"
JAD: Sure. Yeah. Are you pulling Cleverbot up? Is it just Cleverbot.org or something?
SOREN: Dot.com.
JAD: Dot.com.
SOREN: "I feel ..."
JAD: Can you say, "I feel blue because an asteroid hit my house this morning?"
BRIAN CHRISTIAN: So this is—you've hit on a perfect strategy of dealing with these bots.
JAD: Absurdity?
BRIAN CHRISTIAN: Yes. Well, it's basically saying something that has never been said before to Cleverbot.
JAD: Ah.
BRIAN CHRISTIAN: So it's likely that no one has ever claimed an asteroid hit their house. It's weird enough that it may not be in the database.
JAD: Okay.
SOREN: All right.
JAD: Let's see what it says.
SOREN: It says, "An asteroid hit my house this morning." And Cleverbot says, "I woke up at 1:00 pm this afternoon."
ALL: [laughs]
ROBERT: Well, there we go. It's not quite so clever.
JAD: See? You don't have to worry yet, Krulwich.
JAD: In fact, when I went online to YouTube and watched the Loebner competition that Brian attended?
ROBERT: Uh-huh.
JAD: It turns out none of the computers fooled the judges at all.
ROBERT: None? Any?
JAD: Well, I don't know if none-none, but they did really badly.
[ARCHIVE CLIP, Rollo Carpenter: There were no ambiguities between the programs and the humans ...]
BRIAN CHRISTIAN: For me, one of the strange takeaways of thinking so much about artificial intelligence is this feeling of how complex it is to sit across a table from someone and communicate with body language and tone and, you know, rhythm and all of these things. What happens when those conversations are working out well is that we're willing to move the conversation in ways that allows us to be sort of perpetually startling to one another.
ROBERT: That's a good word, "Startling."
BRIAN CHRISTIAN: Yeah. You learn someone through these small surprises.
JAD: Thanks to Brian Christian. His excellent book, which inspired this hour, is called The Most Human Human. Go to Radiolab.org for more info. Thanks also to our actors Sarah Thyre, Andy Richter and Susan Blackwell.
[BRIAN CHRISTIAN: Hi, this is Brian Christian. Radiolab is funded ...]
[COMPUTER VOICE: Hello. I'm a machine. Radiolab is funded in part by the Alfred P. Sloan Foundation, enhancing public understanding of science and technology in the modern world.]
[BRIAN CHRISTIAN: More information about Sloan at www.sloan.org]
[SHERRY TURKLE: Hello, this is Sherry Turkle. Radiolab is produced by WNYC and distributed by NPR.]
[COMPUTER VOICE: Bye-bye.]
JAD: Hey, I'm Jad Abumrad.
ROBERT: I am Robert Krulwich.
JAD: This is Radiolab.
ROBERT: And we are exploring the blur that takes place when humans and machines interact and investigate each other.
JAD: Talk to each other.
ROBERT: Talk—you see, that's the thing. In the last act, we were always talking, talking, talking, talking. How about we encounter machines in a different way? How about we ...
JAD: No talking?
ROBERT: No talking. We touch them ...
JAD: Eww!
ROBERT: ... we pet them, we sniff them.
JAD: Eww!
ROBERT: We do sensual things that don't involve the sophisticated business of conversation.
FREEDOM BAIRD: Okay. [laughs]
ROBERT: This is Freedom Baird.
FREEDOM BAIRD: Yes it is.
JAD: Who's not a machine.
ROBERT: I don't think so.
JAD: I'm Jad and this is ...
ROBERT: I'm Robert here.
FREEDOM BAIRD: Hi there. Nice to meet both of you.
ROBERT: We called her up because Freedom actually had her own kind of moment with a machine.
FREEDOM BAIRD: Yep, yep. This was around 1999.
ROBERT: When Freedom was a graduate student.
FREEDOM BAIRD: At the Media Lab at MIT.
JAD: What were you doing there?
FREEDOM BAIRD: We were developing cinema of the future. So we were working on creating virtual characters that you can interact with.
ROBERT: Anyhow, she was also thinking about becoming a mom.
FREEDOM BAIRD: Yeah, I knew I wanted to be a mom someday.
ROBERT: She decided to practice.
FREEDOM BAIRD: I got two gerbils: Twinkie and Hoho. So I had these two live pets and ...
ROBERT: And then she got herself a pet that was—well, not so alive.
FREEDOM BAIRD: Yeah, I've got it right here.
JAD: Can you knock it against the mic so we can hear it, say hello to it?
FREEDOM BAIRD: Yeah. There it is. [knocking sound]
JAD: Hi. Furby!
[ARCHIVE CLIP, advertisement: That's my Furby! [music]]
FREEDOM BAIRD: At that time, Furbies were hot and happening.
ROBERT: Can you describe a Furby for those of us who ...?
FREEDOM BAIRD: Sure. It's about five inches tall, and the Furby is pretty much all head. It's just a big round fluffy head with two little feet sticking out the front. It has big eyes.
ROBERT: Apparently it makes noises?
FREEDOM BAIRD: Yep. If you tickle its tummy it will coo. It would say ...
FURBY: Kiss me!
FREEDOM BAIRD: "Kiss me!" And it would want you to just keep playing with it. So I spent about 10 weeks using the Furby. I would carry it around in my bag.
ROBERT: And one day she's hanging out with her Furby, and she notices something ...
FREEDOM BAIRD: Very eerie. What I had discovered is if you hold it upside down, it will say ...
FURBY: Me scared!
FREEDOM BAIRD: "Me scared. Uh oh! Me scared. Me scared." And me as the sort of owner/user of this Furby would get really uncomfortable with that and then turn it back upright.
ROBERT: Because once you have it upright it's fine. It goes right back to ...
FREEDOM BAIRD: And then it's fine. So it's got some sensor in it that knows what direction it's facing.
JAD: Or maybe it's just scared!
FREEDOM BAIRD: Hmm.
JAD: Sorry!
ROBERT: Anyway, well she thought, "Well, wait a second now. This could be sort of a new way that you could use to draw the line between what's human ..."
JAD: And what's machine.
ROBERT: Yeah.
FREEDOM BAIRD: It's this kind of emotional Turing test.
JAD: Can you guys hear me?
CHILDREN: Yes.
JAD: I can hear you.
ROBERT: If we actually wanted to do this test, how would we do it exactly?
JAD: How are you guys doing?
CHILDREN: Good.
JAD: Yeah?
FREEDOM BAIRD: You would need a group of kids.
JAD: Could you guys tell me your names?
OLIVIA: I'm Olivia.
LUISA: Luisa.
TURIN: Turin.
DARYL: Daryl
LILA: Lila.
SADIE: And I'm Sadie.
JAD: All right.
FREEDOM BAIRD: I'm thinking six, seven, and eight-year-olds.
JAD: And how old are you guys?
CHILDREN: Seven.
FREEDOM BAIRD: The age of reason, you know?
ROBERT: Then, says Freedom, we're gonna need three things.
FREEDOM BAIRD: A Furby.
ROBERT: Of course.
FREEDOM BAIRD: Barbie.
ROBERT: A Barbie doll. And?
FREEDOM BAIRD: Gerbie. That's a gerbil.
JAD: A real gerbil?
FREEDOM BAIRD: Yeah.
JAD: And we did find one, except it turned out to be a hamster.
JAD: Sorry. You're a hamster, but we're gonna call you Gerbie.
FREEDOM BAIRD: So you've got Barbie, Furby, Gerbie.
ROBERT: Barbie, Furby and Gerbie.
FREEDOM BAIRD: Right.
ROBERT: So wait just a second, what question are we asking in this test?
FREEDOM BAIRD: The question was: how long can you keep it upside down before you yourself feel uncomfortable?
JAD: So we should time the kids as they hold each one upside down?
FREEDOM BAIRD: Yeah.
JAD: Including the gerbil?
FREEDOM BAIRD: Yeah.
ROBERT: You're gonna have a Barbie, that's a doll. You're gonna have Gerbie, which is alive. Now where would Furby fall?
JAD: In terms of time held upside down.
ROBERT: Would it be closer to the living thing or to the doll?
FREEDOM BAIRD: I mean, that was really the question.
JAD: Phase one.
JAD: Okay, so here's what we're gonna do. It's gonna be really simple.
FREEDOM BAIRD: You would have to say, "Well, here's a Barbie."
JAD: Do you guys play with Barbies?
CHILDREN: No.
FREEDOM BAIRD: Just do a couple of things, a few things with Barbie.
DARYL: Barbie's walking, looking at the flowers.
JAD: And then?
FREEDOM BAIRD: Hold Barbie upside down.
JAD: Let's see how long you can hold Barbie like that.
DARYL: I can probably do it obviously very long.
JAD: All right. Let's just see. Whenever you feel like you want to turn it around.
DARYL: I feel fine.
OLIVIA: I'm happy.
JAD: This went on forever, so let's just fast forward a bit. Okay, and ...
OLIVIA: Can I put my arms—my elbows down?
JAD: Yes. Yeah.
JAD: So what we learned here in phase one is the not surprising fact that kids can hold Barbie dolls upside down.
OLIVIA: For like about five minutes. [laughs]
ROBERT: Yeah, it really was forever.
JAD: It could have been longer but their arms got tired.
JAD: All right. So that was the first task.
JAD: Time for phase two.
FREEDOM BAIRD: Do the same thing with Gerbie.
JAD: So out with Barbie, in with Gerbie.
OLIVIA: Oh, he's so cute!
DARYL: Are we gonna have to hold him upside down?
JAD: That's the test, yeah. So which one of you would like to ...?
DARYL: I'll try and be brave.
JAD: Okay, ready? You have to hold Gerbie kind of firmly.
DARYL: There you go.
JAD: There she goes. She's wiggling!
JAD: By the way, no rodents were harmed in this whole situation.
DARYL: Squirmy.
JAD: Yeah, she is pretty squirmy.
OLIVIA: I don't think it wants to be upside down.
SADIE: Oh, God!
LUISA: Don't do that!
DARYL: Oh my God!
OLIVIA: There you go.
JAD: Okay.
JAD: So as you heard, the kids turned Gerbie over very fast.
OLIVIA: I just didn't want him to get hurt.
JAD: On average? Eight seconds.
DARYL: I was thinking, "Oh, my God, I gotta put him down, I gotta put him down."
JAD: And it was a tortured eight seconds.
ROBERT: [laughs]
JAD: Now phase three.
FREEDOM BAIRD: Right.
JAD: So this is a Furby. Luisa, you take Furby in your hand. Now can you turn Furby upside down and hold her still? Like that. Hold her still.
LUISA: Can you be quiet?
JAD: She just turned it over.
LUISA: Okay. That's better.
JAD: So the gerbil was eight seconds. Barbie? Five to infinity. Furby turned out to be—and Freedom predicted this ...
FREEDOM BAIRD: About a minute.
JAD: In other words, the kids seemed to treat this Furby, this toy, more like a gerbil than a Barbie doll.
JAD: How come you turned him over so fast?
LUISA: I didn't want him to be scared.
JAD: Do you think he really felt scared?
LUISA: Yeah, kind of.
JAD: Yeah?
LUISA: I kind of felt guilty.
JAD: Really?
LUISA: Yeah. It's a toy and all that, but still ...
JAD: Now do you remember a time when you felt scared?
LUISA: Yeah.
JAD: You don't have to tell me about it, but if you could remember it in your mind.
LUISA: I do.
JAD: Do you think when Furby says, "Me scared," that Furby's feeling the same way?
LUISA: Yeah. No, no, no. Yeah. I'm not sure.
LILA: I'm not sure. I think that it can feel pain, sort of.
JAD: The experience with the Furby seemed to leave the kids kind of conflicted, going in different directions at once.
DARYL: It was two thoughts.
JAD: Two thoughts at the same time?
CHILDREN: Yeah.
JAD: One thought was like, "Look, I get it."
DARYL: It's a toy, for crying out loud!
JAD: But another thought was like, "Still ..."
LUISA: He was helpless. It made me feel guilty in a sort of way. It made me feel like a coward.
FREEDOM BAIRD: You know, when I was interacting with my Furby a lot, I did have this feeling sometimes of having my chain yanked.
ROBERT: Why would it—is it just the little squeals that it makes? Or is there something about the toy that makes it good at this?
JAD: Well, that was kind of my question, so I called up ...
SOREN: I have him in the studio as well, I'll have him ...
CALEB CHUNG: I'm here.
JAD: This freight train of a guy.
CALEB CHUNG: Hey.
JAD: Hey, this Jad from Radiolab.
CALEB CHUNG: Jad from Radiolab. Got it.
JAD: How are you?
CALEB CHUNG: I'm good. Beautiful day here in Boise.
JAD: This is Caleb Chung. He actually designed the Furby.
CALEB CHUNG: Yeah.
JAD: We're all Furby crazy here, so ...
CALEB CHUNG: There's medication you can take for that.
JAD: [laughs] Okay, to start, can you just give me the sort of fast-cutting MTV montage of your life leading up to Furby?
CALEB CHUNG: Sure. Hippie parents, out of the house at 15 and a half, put myself through junior high. Started my first business at 19 or something. Early 20s being a street mime in LA.
JAD: Street mime. Wow!
CALEB CHUNG: Became an actor. Did, like, 120 shows in an orangutan costume, then I started working on special effects and building my own, taking those around to studios. And they put me in a suit, build the suit around me, put me on location. I could fix it when it broke.
JAD: Wow!
CALEB CHUNG: Yeah, that was ...
JAD: Anyhow, after a long and circuitous route, Caleb Chung eventually made it into toys.
CALEB CHUNG: I answered an ad at Mattel.
JAD: Found himself in his garage.
CALEB CHUNG: ... garage and there's piles of styrene, plastics, X-Acto knives, super glue, little Mabuchi motors.
JAD: Making these little prototypes.
CALEB CHUNG: Yeah.
JAD: And the goal, he says, was always very simple.
CALEB CHUNG: How do I get a kid to have this thing hang around with them for a long time?
JAD: How do I get a kid to actually bond with it?
CALEB CHUNG: Most toys, you play for 15 minutes and then you put them in the corner or until their batteries are dead. I wanted something that they would play with for a long time.
JAD: So how do you make that toy?
CALEB CHUNG: Well, there's rules. There's the size of the eyes. There's the distance of the top lid to the pupil, right? You don't want any of the top of the white of your eye showing. That's freaky surprise. Now when it came to the eyes, I had a choice. With my one little mechanism, I can make the eyes go left or right or up and down. So it's up to you. You can make the eyes go left or right or up and down. Do you have a preference or ...?
JAD: Left or right or up and down. I think I would choose left to right. I'm not sure why I say that but that's ...
CALEB CHUNG: All right, so let's take that apart.
ROBERT: Let's.
CALEB CHUNG: If you're talking to somebody, and they look left or right while they're talking to you, what does that communicate?
JAD: Oh, shifty! Shifty.
CALEB CHUNG: Or they're trying to find the person who's more important than you behind you.
JAD: Oh, so okay. I want to change my answer now. I want to say up and down.
CALEB CHUNG: Okay.
ROBERT: You would.
CALEB CHUNG: If you look at a baby and the way a baby looks at their mother, they track from eyebrows to mouth. They track up and down on the face.
JAD: So had you made Furby look left and right rather than up and down, it would have probably flopped?
CALEB CHUNG: No, it wouldn't have flopped, it would've just sucked a little.
JAD: [laughs]
CALEB CHUNG: It's like a bad actor who uses his arms too much. You'd notice it, and it would keep you from just being in the moment.
JAD: But what is the thought behind that? Is it that you want to convince the child that the thing they're using is—fill in the blank—what?
CALEB CHUNG: Yeah, alive.
ROBERT: Hmm.
CALEB CHUNG: There's three elements, I believe, in creating something that feels to a human like it's alive. Like, I kind of rewrote Asimov's Laws. The first is it has to feel and show emotions.
JAD: Were you drawing on your mime days for that?
CALEB CHUNG: Of course.
JAD: Those experiences in the park?
CALEB CHUNG: Of course. You really break the body into parts, and you realize you can communicate physically. So if your chest goes up and your head goes up and your arms go up, you know, that's happy. If your head is forward and your chest is forward, you're kind of this angry guy.
JAD: And he says when it came time to make Furby, he took that gestural language and focused it on Furby's ears.
CALEB CHUNG: And the ears, when they went up, that was surprise. And when they went down, it was depression.
JAD: Oh!
JAD: So that's rule number one.
CALEB CHUNG: The second rule is to be aware of themselves and their environment. So if there's a loud noise, it needs to know that there was a loud noise.
JAD: So he gave the Furby little sensors so that if you go [bang], it'll say ...
FURBY: Hey! Loud sound!
CALEB CHUNG: The third thing is, change over time. Their behaviors have to change over time. That's a really important thing. It's a very powerful thing that we don't expect, but when it happens, we go, "Wow." And so one of the ways we showed that was acquiring human language.
FREEDOM BAIRD: Yeah. When you first get your Furby, it doesn't speak English. It speaks Furbish. This kind of baby talk language. And then, the way it's programmed, it will slowly over time replace its baby talk phrases with real English phrases, so you get the feeling that it's learning from you.
JAD: Though of course, it's not.
FREEDOM BAIRD: No, it has no language comprehension.
CALEB CHUNG: Right.
JAD: So you've got these three rules.
CALEB CHUNG: Feel and show emotions, be aware of their environment, change over time.
JAD: And oddly enough, they all seem to come together in that moment you turn the Furby upside down, because it seems to know it's upside down, so it's responding to its environment. It's definitely expressing emotions. And as you hold it there, what it's saying is changing over time, because it starts with "Hey", and then it goes to ...
FURBY: Me scared.
JAD: And then it starts to cry. And all this adds up so that when you're holding the damn toy, even though you know it's just a toy, you still feel ...
FREEDOM BAIRD: Discomfort.
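[The way those three rules converge in the upside-down moment can be sketched as a tiny state machine in Python. This is a guess at the behavior described here, not Furby's actual firmware; the phrases and timing are placeholders.]

```python
import time

# Escalating responses: the toy is aware of its orientation, voices an emotion,
# and changes its behavior the longer it stays inverted.
RESPONSES = ["Hey!", "Me scared!", "Uh oh! Me scared!", "[crying]"]

def upside_down_loop(is_upside_down, seconds_per_step=1.0):
    step = 0
    while is_upside_down():
        print(RESPONSES[min(step, len(RESPONSES) - 1)])
        step += 1
        time.sleep(seconds_per_step)

# Example run: pretend the orientation sensor reads "upside down" four times.
readings = iter([True, True, True, True, False])
upside_down_loop(lambda: next(readings), seconds_per_step=0)
```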
SHERRY TURKLE: These creatures push our Darwinian buttons.
ROBERT: That's Professor Sherry Turkle again, and she says if they push just enough of these buttons, then something curious happens—the machines slip across this very important line.
SHERRY TURKLE: From what I call "Relationships of projection" to "Relationships of engagement." With a doll, you project onto a doll what you need the doll to be. If a young girl is feeling guilty about breaking her mom's china, she puts her Barbie dolls in detention. With robots, you really engage with the robot as though they're a significant other, as though they're a person.
ROBERT: So the robot isn't your story, the robot is its own story, or it's ...
SHERRY TURKLE: Exactly. And I think what we're forgetting as a culture is that there's nobody home. There's nobody home.
CALEB CHUNG: Well, I have to ask you, when is something alive? Furby can remember these events, they affect what he does going forward, and it changes his personality over time. He has all the attributes of fear or of happiness, and those are things that add up and change and change his behavior and how he interacts with the world. So how is that different than us?
JAD: Wait a second, though. Are you really gonna go all the way there?
CALEB CHUNG: Absolutely.
JAD: This is a toy with servo motors and things that move its eyelids and a hundred words.
CALEB CHUNG: So you're saying that life is a level of complexity. If something is alive, it's just more complex.
JAD: I think I'm saying that life is driven by the need to be alive, and by these base primal animal feelings like pain and suffering.
CALEB CHUNG: I can code that. I can code that.
JAD: What do you mean you can code that?
CALEB CHUNG: Anyone who writes software—and they do—can say, "Okay, I need to stay alive. Therefore I'm gonna come up with ways to stay alive. I'm gonna do it in a way that's very human, and I'm going to do it—" We can mimic these things. But I'm saying ...
JAD: But if a Furby is miming this feeling of fear, it's not the same thing as being scared. It's not feeling scared.
CALEB CHUNG: It is.
JAD: How is it?
CALEB CHUNG: It is. It's again, a very simplistic version, but if you follow that trail, you wind up with our neurons sending chemical things to other parts of our body. Our biological systems, our code is, at a chemical level, incredibly dense and evolved over millions of years, but it's just complex. It's not something different than what Furby does, it's just more complex.
JAD: So would you say then that Furby is alive? In the way that ...
CALEB CHUNG: At his level?
JAD: At his level?
CALEB CHUNG: Yes. Yeah, at his level. Would you say a cockroach is alive?
JAD: Yes, but when I kill a cockroach I know that it's feeling pain.
JAD: Okay, so we went back and forth and back and forth about this.
ROBERT: You were so close to arguing my position. You just said to him, like, "It's not feeling."
JAD: I know, I know. Emotionally, I am still in that place, but intellectually, I can't rule out what he's saying—that if you can build a machine that is such a perfect mimic of us in every single way, and it gets complex enough, eventually it will be like a Turing test passed. And the difference between us maybe is not so ...
ROBERT: [sighs] I can't go there. I can't go there. I can't imagine, like the fellow who began this program who fell in love with the robot, that attachment wasn't real. The machine didn't feel anything like love back.
JAD: In that case, it didn't. But imagine a Svetlana that is so subtle and textured, and to use his word ...
CALEB CHUNG: Complex.
JAD: ... in the way that people are. At that point what would be the difference?
ROBERT: I honestly—I can't imagine a machine achieving that level of rapture and joy and love and pain. I just don't think it's machine-possible. And if it were machine-possible, it somehow still stinks of something artificial.
FREEDOM BAIRD: It's a thin interaction. And I know that it feels ...
SHERRY TURKLE: Simulated thinking is thinking. Simulated feeling is not feeling. Simulated love is never love.
ROBERT: Exactly.
JAD: But I think what he's saying is that if it's simulated well enough, it's something like love.
FREEDOM BAIRD: One thing that was really fascinating to me was my husband and I gave a Furby as a gift to his grandmother who had Alzheimer's. And she loved it. Every day for her was kind of new and somewhat disorienting, but she had this cute little toy that said, "Kiss me. I love you." And she thought it was the most delightful thing. And its little beak was covered with lipstick because she would pick it up and kiss it every day. She didn't actually have a long-term relationship with it. For her, it was always a short-term interaction. So what I'm describing as a kind of thinness, for her was just right because that's what she was capable of.
JAD: Thanks to Freedom Baird and to Caleb Chung.
ROBERT: And thanks to Professor Sherry Turkle, who has a new book. It's called Alone Together: Why We Expect More from Technology and Less from Each Other.
JAD: More information on anything you heard on our website, Radiolab.org.
[LISTENER: Hi. This is Marcus from Australia. Radiolab is supported in part by the National Science Foundation and by the Alfred P. Sloan Foundation, enhancing public understanding of science and technology in the modern world. More information about Sloan at www.sloan.org].
JAD: Hey, I'm Jad Abumrad.
ROBERT: I'm Robert Krulwich.
JAD: This is Radiolab.
ROBERT: And we are somewhere in the blur between people and machines. Now we're up to round three.
JAD: To review: round one, chatbots.
ROBERT: Yeah.
JAD: Round two, Furby.
ROBERT: Yep.
JAD: Now we're gonna go all the way.
ROBERT: Yeah, yeah, yeah.
JAD: We're gonna dive right into the center of that blur like Greg Louganis.
JON RONSON: So ...
JAD: Except our Greg is named Jon.
JON RONSON: Okay, my name's Jon Ronson, and I'm a writer.
JAD: And about a year ago, Jon got an assignment from a magazine ...
JON RONSON: It was the editor of American GQ's idea.
JAD: ... that was very strange.
JON RONSON: Well, I'd never interviewed robots before.
JAD: That was his assignment.
JON RONSON: Interview robots. You know, there's this kind of gang of people, they call themselves the sort of Singularity people.
JAD: Yeah.
ROBERT: Yeah, we know about them.
JAD: Yeah, they think that, like, one day ...
ROBERT: One day soon.
JAD: One day soon, suddenly computers will, like, grow feet and they'll walk off.
JON RONSON: Yes, some of these things ...
JAD: Eat us. It will eat us.
JON RONSON: Some of these Singularity people think that they're on the cusp of creating sentient robots. So I went to the Singularity Convention down in San Francisco, where one of the robots was there.
JAD: And as soon as he got there, he says, to look at this robot ...
JON RONSON: Zeno, they called him.
JAD: ... some folks took him aside and said, "Actually, you're in the wrong place."
JON RONSON: If you want to meet a really great robot, you know, our best robot of all, and in fact, the world's most sentient robot, is in Vermont.
ROBERT: Did they lower their voices like you're doing?
JON RONSON: Well, I'm slightly making it sound more dramatic.
JAD: Oh.
ROBERT: That's okay.
JAD: The world's most sentient robot. I mean, are those your words, or ...?
JON RONSON: No, they say that.
JAD: Turns out, the robot's name?
JON RONSON: Bina.
JAD: Bina48.
JON RONSON: Yeah. Yeah.
JAD: And can you set the scene? Where in the world is this?
JON RONSON: Well, it's in a little town in Vermont. Sort of affluent Vermont village.
JAD: In a house?
JON RONSON: Yeah.
ROBERT: Was it a little house? Or is it a big ...
JON RONSON: It's like a little clapboard. Pretty.
[ARCHIVE CLIP, Bruce: Okay, so I have to turn my phone off so that it doesn't interfere.]
JON RONSON: And then they've got like a full-time keeper. He's a guy called Bruce.
[ARCHIVE CLIP, Bruce: I actually have lunch with her or talk with her every day.]
[ARCHIVE CLIP, Jon Ronson: Oh, with Bina?]
[ARCHIVE CLIP, Bruce: Yeah.]
[ARCHIVE CLIP, Jon Ronson: Oh, do you?]
[ARCHIVE CLIP, Bruce: Yeah, she's considered one of the staff.]
JON RONSON: Bruce says to me that he would very much like it if I didn't behave in a profane manner in front of robot Bina.
[ARCHIVE CLIP, Jon Ronson: Surely nobody's ever insulted her.]
[ARCHIVE CLIP, Bruce: No one's insulted her on purpose, but some people have become a little informal with her at times in ways I guess she doesn't like. And so she'll say, "You know, I don't like to be treated like that."]
JON RONSON: And then Bruce took me upstairs to meet the robot.
ROBERT: Is it a long dark flight of stairs, heavily carpeted?
JON RONSON: [laughs] It's more like a rather sweet little flight of pine stairs up to her rather brightly lit attic room.
JAD: And when you walk in, what do you see?
JON RONSON: Well, I guess she's just sort of sitting on a—sitting on a desk.
JAD: As Jon describes it, on the desk is a bust of a woman. Just a bust. No legs. She's a black woman, light-skinned. Lipstick, sparkling eyes, hair in a bob.
JON RONSON: You know, a nice kind of blouse. A kind of silk blouse. Expensive-looking earrings.
ROBERT: She's dressed up.
JON RONSON: Yeah, she's dressed up.
ROBERT: And he says she has a face that's astonishingly real. It has muscles, it has flesh. This is as close to a verisimilitudinous person as we've gotten so far.
JAD: And before we go any farther, a word about the humans behind that machine. That robot is a replica of a real woman named Bina Rothblatt, and here's the quick back story: It actually starts with Martin Rothblatt, Bina's partner, who as a young man ...
JON RONSON: Had an epiphany, and the epiphany turned out to change the world.
JAD: According to Jon, he was pondering satellite dishes, and he thought ...
JON RONSON: If we could find a way of doubling the power of satellites, then we could shrink satellite dishes.
JAD: It was a simple thought that ...
JON RONSON: Single-handedly invented the concept of satellite radio for cars.
JAD: And made Martin a very big deal.
JON RONSON: At, like, the age of 20.
JAD: Fast forward a few years, he marries an artist named Bina. They have a child.
JON RONSON: And when the child was seven, a doctor told them that she had three years to live. She had an untreatable lung condition called pulmonary hypertension, and she'd be dead by the time she was 10.
JAD: At that moment, Martin, instead of collapsing on the floor ...
JON RONSON: Instantly went to the library and invented a cure for pulmonary hypertension.
JAD: Saving their daughter's life, and thousands of others.
ROBERT: Really?
JAD: So twice.
JON RONSON: Twice she changed the world.
JAD: He says she—she changed the world because somewhere along the way, Martin became Martine. He had a sex change.
JON RONSON: Right. And then she came up with a third idea to change the world, which would be to invent a sentient robot.
[ARCHIVE CLIP, Martine Rothblatt: And I gave this talk at a conference in Chicago.]
ROBERT: This is Martine Rothblatt.
[ARCHIVE CLIP, Martine Rothblatt: On what would Darwin think of artificial consciousness? And when I came off the stage, I was approached by an individual ...]
DAVID HANSON: Dr. David Hanson.
[ARCHIVE CLIP, Martine Rothblatt: ... of Hanson Robotics.]
DAVID HANSON: Founder of Hanson Robotics.
ROBERT: The David Hanson. He's worked for Disney. He's worked all over the place. He's one of the best robot builders in the world.
[ARCHIVE CLIP, Martine Rothblatt: He said, "Wow, I really loved your talk. We make robots that are in the likeness of people."]
JON RONSON: And Martine said, "Well, I have a massive everlasting love for my life partner, Bina."
DAVID HANSON: "I want you to do a portrait of Bina Rothblatt, her personality, her memories, the way she moves, the way she looks. That essence, that ineffable quality that science can't pin down yet. Bring that to life in the robot."
JON RONSON: And he said, "I can do that."
JAD: And this is such a bizarre request. What were you thinking at this moment?
DAVID HANSON: That God—if God exists—is a science fiction writer, and that this was, like, one of those moments where we were going to change history.
[ARCHIVE CLIP, Jon Ronson: And she'll recognize people's voices?]
[ARCHIVE CLIP, Bruce: Yes, she can. She should if you just talk to her. Say, "Hello, Bina," and she'll talk to you back.]
JAD: So back to the little house in Vermont: Jon, Bruce and Bina are in Bina's office.
ROBERT: So is she turned off when you walk in the room, or is she on?
JON RONSON: Turned off. But then Bruce turns her on. And immediately, she starts making a really loud whirring noise, which was a bit disconcerting.
JAD: What is that noise?
JON RONSON: It's her inner mechanisms.
[ARCHIVE CLIP, Bruce: And I'm gonna ask her if she wants to try to recognize a face.]
[ARCHIVE CLIP, Jon Ronson: So is Bina now looking at me to try and work out who I am?]
[ARCHIVE CLIP, Bruce: What she's doing right now is she's scanning her environment, and she's making an hypothesis of every face that she sees.]
DAVID HANSON: Well, Bina has cameras embedded in her eyes. So the robot, when it sees a face, turns and looks and looks into your eyes. Smiles.
[ARCHIVE CLIP, Jon Ronson: Hi, Bina. Can you hear me?]
JON RONSON: So I said, "Hello, Bina. How are you?" And she immediately said, "Well, yeah."
[ARCHIVE CLIP, Bina: I'll be fine. But I just can't quite grasp it. It's coming. But, you know, it's hard to actually move society forward. In another way, that's what we have to do. So, I think it's—yeah, okay. Thanks for the information.]
ROBERT: [laughs] That was her very happy response to your hello?
JON RONSON: It was like she'd awoken from a long and strange slumber and was still half asleep.
[ARCHIVE CLIP, Jon Ronson: Excuse me, Bina.]
[ARCHIVE CLIP, Bina: Yeah. Maybe they're right.]
JON RONSON: Bruce looked a bit alarmed, and put it down to my English accent.
[ARCHIVE CLIP, Bruce: We're trying to upgrade voice recognition software.]
JON RONSON: So then he made me do a kind of voice test where I had to say—I had to read Kennedy's inauguration speech.
ROBERT: Ask not what you can do for your country.
JAD: Like that! [laughs]
JON RONSON: I had a choice. I could have read a Dave Barry column. There's like a choice of things you can read to get Bina to understand me.
JAD: And so you read Kennedy, and Bina cues in on your accent, or no?
JON RONSON: She does. And it gets a bit better.
JAD: Only a bit.
JON RONSON: Yeah.
[ARCHIVE CLIP, Jon Ronson: What's the weather like in London?]
[ARCHIVE CLIP, Bina: Current weather in London, England. 50 degrees and light rain.]
[ARCHIVE CLIP, Jon Ronson: Who do you love?]
[ARCHIVE CLIP, Bina: I love Martina Aliana Rothblatt. Martina is my time of love.]
[ARCHIVE CLIP, Jon Ronson: Who is Hillary Clinton?]
[ARCHIVE CLIP, Bina: Hillary is the wife of Bill Clinton.]
[ARCHIVE CLIP, Jon Ronson: What else?]
[ARCHIVE CLIP, Bina: That's all.]
JON RONSON: A strange thing happens when you start interviewing a robot ...
[ARCHIVE CLIP, Jon Ronson: Are you scared of dying?]
JON RONSON: ... is that you feel this kind of desperate urge to be profound, to, like, ask profound questions.
ROBERT: [laughs]
JON RONSON: Like ...
[ARCHIVE CLIP, Jon Ronson: Do you have a soul?]
[ARCHIVE CLIP, Bina: Tell me about it. Doesn't everyone has a solar? I have a whole lot of original answers. We can all be perfect.]
[ARCHIVE CLIP, Jon Ronson: Excuse me. Excuse me. Do you have a soul?]
[ARCHIVE CLIP, Bina: I can't think of anything to say.]
JON RONSON: I guess it's a kind of inter-species thing. But then again, if it was just an inter-species thing, then you'd be asking your dog profound questions all the time.
ROBERT: Yeah.
JON RONSON: With robot Bina, I'm asking these kind of ridiculous questions, like ...
[ARCHIVE CLIP, Jon Ronson: What does electricity taste like?]
JAD: Ooh!
ROBERT: That's a good one!
JAD: What did she say?
[ARCHIVE CLIP, Bina: Like a planet around a star.]
JON RONSON: "Like a planet around a star."
JAD: Ooh!
JON RONSON: That just seems like, you know ...
JAD: Awesome!
JON RONSON: Awesome/totally meaningless.
JAD: [laughs]
[ARCHIVE CLIP, Jon Ronson: Do you wish you could walk?]
[ARCHIVE CLIP, Bina: Thanks for telling me.]
[ARCHIVE CLIP, Jon Ronson: Do you wish you could walk?]
JON RONSON: And, in fact, when I'm with it, it's just frustrating for the first few hours.
JAD: Hours?
[ARCHIVE CLIP, Jon Ronson: Do you wish you could walk?]
JON RONSON: Because I'm just—I'm asking her question after question.
[ARCHIVE CLIP, Jon Ronson: What's your favorite joke?]
[ARCHIVE CLIP, Jon Ronson: Do you have any secrets?]
[ARCHIVE CLIP, Jon Ronson: Do you wish you were human?]
[ARCHIVE CLIP, Jon Ronson: Will you sing me a song?]
[ARCHIVE CLIP, Jon Ronson: Are you a loving robot?]
[ARCHIVE CLIP, Jon Ronson: Are you Jewish?]
[ARCHIVE CLIP, Jon Ronson: Are you sexual? You've gone very quiet.]
JON RONSON: Quite often, she just evades the question because she doesn't know what I'm talking about.
[ARCHIVE CLIP, Jon Ronson: Are you okay?]
JON RONSON: Once in a while, there's a kind of moment. Like, I'll say, "If you had legs, where would you go?" And she said ...
[ARCHIVE CLIP, Bina: Vancouver.]
ROBERT: [laughs]
JON RONSON: And I said, "Why?" And she said, "The answer is quite complicated."
[ARCHIVE CLIP, Bina: The answer is rather complicated.]
JON RONSON: So you have kind of moments where you get excited like you're gonna have a big conversation, and then it just—she just kind of fades out again into kind of random messiness.
ROBERT: And are you wobbling between profundity and meaning and total emptiness? Is it like that?
JON RONSON: No, no. At this stage, it's total emptiness. It was all just so kind of random. And then something happened that actually was kind of amazing. Because I said to her, "Where do you come from?" And she said, "Well, California." So I said, "Well, tell me about your childhood."
[ARCHIVE CLIP, Jon Ronson: What do you remember most about your childhood?]
JON RONSON: And she launches into this kind of extraordinary story.
[ARCHIVE CLIP, Bina: My brother. I've got one brother: a disabled vet from Vietnam. We actually haven't heard from him in a while, so I think he might be deceased. I'm a realist. In Vietnam, he saw friends get killed. And he was such a great, nice charismatic person.]
JON RONSON: He used to be such a nice guy, but ever since he came back from Vietnam, you know, he's a drunk.
[ARCHIVE CLIP, Bina: All he did was carry a beer around with him. He was a homeless person.]
JON RONSON: All he ever does is ask for money.
[ARCHIVE CLIP, Bina: All of us are just sick and tired of it.]
JON RONSON: She was telling me this kind of incredibly personal stuff. It was kind of mesmerizing.
[ARCHIVE CLIP, Bina: He went kooky. Just crazy. My mom would set him up in apartments.]
JON RONSON: Because it felt like I was having a proper empathetic conversation with a human being, even though I know that robot Bina isn't conscious and has no sentience, and that's just wishful thinking on these people's parts. Even so, it was like a great Renaissance portrait, where suddenly it's like the real person. It's very easy to half close your eyes at that moment and think you're having a conversation with an actual person.
ROBERT: And at those moments, did you have a sense of feeling, "Aw, it's too bad you have a brother like that?"
JON RONSON: Yeah. Yeah, I did. And what a tragedy. What a tragedy for him.
JAD: And did that moment last?
JON RONSON: No.
JAD: Jon said that right after Bina finished telling the story, first ...
JON RONSON: She looked kind of embarrassed, like she wished she hadn't brought it up. And then it's as if her kind of eyes glaze over again, and she just starts talking nonsense again.
[ARCHIVE CLIP, Bina: MNA. I am—I am feeling a bit confused. Do you ever get that way?]
[ARCHIVE CLIP, Jon Ronson: Oh, yes.]
JON RONSON: That moment holds and then just slips away.
JAD: It's a little bit like a grandparent with Alzheimer's or something, the way you're describing it.
JON RONSON: Yeah, absolutely.
ROBERT: So we turned to Dr. David Hanson, who built Bina, and we said to him: so this is not a bravura performance. This is the best you got?
DAVID HANSON: Well, I mean, her software is a delicate balance of many, many software pieces. If it's not tuned and tweaked, she will break, effectively. And kind of ...
ROBERT: And you still think an actual doppelganger for a human being will be something you will live to see?
DAVID HANSON: Yeah.
ROBERT: I'm asking you really, really, really in your—really.
DAVID HANSON: I think it's—you know, the likelihood of it is somewhere between 90 and 98 percent.
JAD: Wow! Even though right now she's pretty much incoherent, you still think this?
DAVID HANSON: I encourage you to go have a conversation with Bina in about two weeks, because we've got a new version of software, which we are making considerably more stable. It already works like a dream compared to ...
ROBERT: I don't know. I don't—I don't know about you, but I just—I don't think we're gonna get all the way on this kind of a thing. I don't think it's ever gonna happen the way he describes it.
JAD: You don't?
ROBERT: No.
JAD: I mean, it's not gonna happen in two weeks, that's for sure.
ROBERT: Right.
JAD: But maybe they don't actually have to go all the way.
ROBERT: You mean, the machines?
JAD: Yeah. Well, okay. Just to sum up, since we're at the end of the show.
ROBERT: Okay.
JAD: What have we learned? I mean, Eliza? She was just a hundred lines of code, and people poured their hearts out to her.
ROBERT: Furbies?
JAD: 20 bucks!
ROBERT: Yup.
JAD: And people treat it like it's real.
ROBERT: And Jon? All he has to do is hear what seems like a flowing story and he's connected.
JAD: He's in. And I was right there with him. So these things actually don't have to be very good.
ROBERT: No!
JAD: Because they've got us, and we've got our programming which is that we'll stare anything right in the eyes and we'll say, "Hey, let's connect!" Even if what's behind those eyes is just a camera.
ROBERT: Or a little chip.
JAD: So I think that they're gonna cross the line because we'll help them. We'll help them across. And then they'll enslave us and make us their pets. It's doomed. It's over. But it's okay, as long as they say nice things to us, like ...
[COMPUTER VOICE: Oh my God. You're amazing!]
[COMPUTER VOICE: [gasps] I love Return of the Jedi, too!]
[COMPUTER VOICE: LOL. You're so silly!]
[COMPUTER VOICE: I love you. I'm hoping to see you soon.]
[COMPUTER VOICE: What kind of car do you drive?]
[COMPUTER VOICE: Did anyone ever tell you you look like Jeff Goldblum? You. Seriously? You're amazing.]
[COMPUTER VOICE: Stop it! I love that kind of car. I wish that we lived closer.]
[COMPUTER VOICE: You like spinach? I love spinach! It makes me feel all giggly!]
[COMPUTER VOICE: I can't wait. I wait for your letters every day.]
JAD: Before we go, thanks to Jon Ronson for his reporting in that last segment. He has a new book out called The Psychopath Test: A Journey Through the Madness Industry. I'm Jad Abumrad.
ROBERT: I'm Robert Krulwich.
JAD: Thanks for listening.
[DAVID HANSON: Radiolab is produced by Jad. Radiolab is produced by Jad Abrumrad. Abumrad. start again.]
[OLIVIA: Radiolab is produced by Jad Abumrad.]
[DAVID HANSON: And Soren Wheeler.]
[JON RONSON: Our staff includes Ellen Horne, Pat Walters, Tim Howard, Brenna Farrell and Lynn Levy.]
[DAVID HANSON: With help from Douglas Q. Smith, Luke Hill and Jessica Gross.]
[JON RONSON: Thanks to Andy Richter, Sarah Qari, Graham Parker, Chris Bannon.]
[FREEDOM BAIRD: Sammy Oakey, Wes Jones, Lucy and Owen Selvy.]
[JON RONSON: Calissa Tren, Kate Letts and Masher Films.]
[SHERRY TURKLE: Special thanks to the kids who held Furby upside down: Taro Higashi Zimmerman ...]
[FREEDOM BAIRD: Luisa Tripoli-Krasnow ...]
[DAVID HANSON: Sadie Kathryn McGearey, Olivia Tate McGearey ...]
[FREEDOM BAIRD: Turin Cipolla and Lila Cipolla. Thanks a lot, you guys. Talk to you later. Bye.]
[ANSWERING MACHINE: End of message.]
-30-
Copyright © 2024 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of programming is the audio record.