
Jun 1, 2011
Transcript
[RADIOLAB INTRO]
ROBERT EPSTEIN: Hi there.
JAD ABUMRAD: We're gonna start today's program with a fellow named Robert.
JAD: Is it EP-steen or EP-stine?
ROBERT EPSTEIN: Just think Einstein with an "Ep."
JAD: Okay. [laughs]
ROBERT KRULWICH: That would make it Epstein, I guess.
ROBERT EPSTEIN: That's right.
JAD: And where are we reaching you right now?
ROBERT EPSTEIN: I am in San Diego area.
JAD: Robert Epstein is a psychologist.
ROBERT EPSTEIN: Former editor-in-chief of Psychology Today magazine.
JAD: He's written a ton of books on relationships and love, and he also happens to be one of the world's leading researchers in computer-human interactions. Like artificial intelligence, basically.
ROBERT EPSTEIN: That is correct, yes.
ROBERT: So when did you decide to go onto the computer to get a date?
ROBERT EPSTEIN: [laughs] 2006, maybe. Why do you ask?
JAD: Oh, no reason.
ROBERT: What happened? You were—you had gotten divorced?
ROBERT EPSTEIN: Yeah, I was single at the time. Yeah, I was divorced.
ROBERT: And you decided that you'd try love in all the right places, or what?
ROBERT EPSTEIN: Oh, sure. Well, online dating? Everyone was doing it. My cousin actually convinced me to try it. So I did, and I went online and I looked at photos and I looked at profiles and, you know, and I communicated with various people who were willing to talk to me. And one of the women I was communicating with lived in Southern California, where I do. So I thought that's great because, you know, you want someone to be nearby. And she had a very attractive photo online. And her English was poor, which at first bothered me and then she said well, she's not really in California, she's really in Russia.
JAD: Oh!
ROBERT EPSTEIN: But all four of my grandparents came from Russia, so I thought well, I'll go with it. So I continued to write to her.
[VOICE ACTOR, Robert: Hi, sweet Svetlana. It's very warm here now, and I've been doing a lot of swimming. I've also been writing, doing computer programming.]
ROBERT EPSTEIN: She wrote back to me in very poor English.
[VOICE ACTOR, Svetlana: Hello, dear Robert. Dear mine, I have received your letter. I am very happy.]
ROBERT EPSTEIN: I remember that she liked to walk in parks.
[VOICE ACTOR, Svetlana: Went on walk with the girlfriend and we went and walked in park.]
ROBERT EPSTEIN: And her telling me about her family and her mom.
[VOICE ACTOR, Svetlana: My mom asked me about you today, and we spoke much and long time.]
ROBERT EPSTEIN: They lived in a small apartment. I knew where in Russia they lived.
[VOICE ACTOR, Svetlana: Yours, Svetlana.]
ROBERT EPSTEIN: It felt like we were bonding, for sure.
[VOICE ACTOR, Robert: Hello. I might be able to come to Moscow on Sunday, April 15th, departing Thursday, April 19th. With love, Robert.]
JAD: Oh, so it was getting serious.
ROBERT EPSTEIN: Oh, yeah, of course.
ROBERT: And then what happened?
ROBERT EPSTEIN: Well, two months passed and I began to feel uncomfortable. Something wasn't right.
[VOICE ACTOR, Svetlana: Hello, my dear.]
ROBERT EPSTEIN: There were no phone calls.
[VOICE ACTOR, Svetlana: Dear mine, I am very happy.]
ROBERT EPSTEIN: At some point, I began to suggest a phone call, but there weren't any. But the main problem was I would say something like ...
[VOICE ACTOR, Robert: Did you get my letter about me coming to Moscow in April?]
ROBERT EPSTEIN: Or "Tell me more about this friend of yours that you mentioned," and she did not.
[VOICE ACTOR, Svetlana: Dear mine, I am very glad to your letter.]
ROBERT EPSTEIN: She did not. She was still replying with fairly long emails ...
[VOICE ACTOR, Svetlana: I'm fine. Weather in my city, very bad.]
ROBERT EPSTEIN: ... but they were kind of rambling and general.
[VOICE ACTOR, Svetlana: I think of you always much, and I very much want to see more like you.]
[VOICE ACTOR, Robert: I already gave you some dates for a visit to Moscow, my love. What do you think about that?]
ROBERT EPSTEIN: Then at some point, a little bell went off in my head, finally, and I started to send some emails which, let's say, included random alphabet letters.
ROBERT: Wait a second. So you say, "How—what are you wearing tonight? Are you wearing a dbgggglp?"
ROBERT EPSTEIN: Exactly. And it didn't make any difference.
[VOICE ACTOR, Svetlana: Hello, dear Robert. Your letters do me very happy when I open a letterbox.]
ROBERT EPSTEIN: And that's when I realized Svetlana was not a person, Svetlana was a computer program. I had been had.
JAD: Wow. So what did you think?
ROBERT EPSTEIN: I felt like a fool. I felt like an incredible fool, especially given my background.
JAD: Yeah.
ROBERT EPSTEIN: That I had been fooled that long. Now I can tell you, now this is something that I have never made public about the other example ...
JAD: Robert went on to tell us that not long after that first incident, he was corresponding with someone ...
ROBERT EPSTEIN: With a woman, I thought.
JAD: ... Who also turned out to be a robot. And he discovered it this time because ...
ROBERT EPSTEIN: The programmer contacted me from the UK and said, "I know who you are. You have not been communicating with a person. You've been communicating with a chatbot."
JAD: Whoa!
ROBERT: You've been now undressed twice by robots.
JAD: So to speak.
ROBERT EPSTEIN: Well, and maybe more than twice.
JAD: Well, how common do you think this is? Do you think that Match.com and all those places are, like, swarming with these bots?
ROBERT EPSTEIN: You know, I bet you they are. That's what you have to understand: there are hundreds of these things out there. There might be thousands.
[COMPUTER VOICE: You're amazing!]
ROBERT EPSTEIN: That's what's coming.
[COMPUTER VOICE: What sign are you? I told my girlfriends all about you.]
ROBERT: So in a world like this ...
[COMPUTER VOICE: You're wonderful!]
ROBERT: ... where we are surrounded by artificial life forms ...
[COMPUTER VOICE: What do you look like?]
JAD: Things can get a little confusing. And in fact, we're gonna do a whole show about that confusion, about the sometimes peculiar ...
ROBERT: Sometimes strange ...
JAD: ... things that can happen when humans and machines collide.
ROBERT: Collide, but don't quite know who's on what side of the road?
JAD: Yeah.
ROBERT: I don't know.
JAD: I'm Jad Abumrad.
ROBERT: And I'm Robert ...
JAD: That was good. That was good. Just go with it.
ROBERT: Okay, I'm Robert Krulwich.
JAD: This is Radiolab.
ROBERT: And we're talking to machines.
[COMPUTER VOICE: You are so special!]
[COMPUTER VOICE: Send me your credit card info?]
[COMPUTER VOICE: I love peppermint!]
JAD: To start things off, let's introduce you to the person who really hooked us on this whole idea of human-robot chitchat.
BRIAN CHRISTIAN: My name is Brian Christian.
JAD: He's a writer.
JAD: Are you Christian?
BRIAN CHRISTIAN: Religiously? No.
JAD: That's not at all related to anything.
ROBERT: What's wrong with you?
JAD: It's his name! But it—no, what's important is that he wrote a book ...
BRIAN CHRISTIAN: Called The Most Human Human.
JAD: Which is all about the confusing things that can happen when people and machines interact.
ROBERT: How did you—this is such a curious thing to get ...
JAD: Yeah. How did you get into this?
BRIAN CHRISTIAN: I played with MS-DOS intently when I was a child.
JAD: Oh, there you go.
BRIAN CHRISTIAN: Yeah.
JAD: DOS is kind of the early version of Windows.
BRIAN CHRISTIAN: I was programming these sort of rudimentary maze games.
JAD: Like a cursor going through a maze?
BRIAN CHRISTIAN: Yeah, basically.
ROBERT: Did this by any chance mean you did not develop best friends?
JAD: [laughs]
BRIAN CHRISTIAN: A lot of my best friends were also into that, yeah.
JAD: Wow!
BRIAN CHRISTIAN: We were not the coolest, but we had a lot of fun.
ROBERT: So there you are, and you just had a—you just had a talent for this?
BRIAN CHRISTIAN: Yeah. I don't know what it was. I mean, I was just—there was something I think fascinating to me that you could take a process that you knew how to do, but in breaking it down to steps that were that explicit, you often learned something about how the process actually works. For me, programming is surprisingly linked to introspection.
JAD: How, exactly?
BRIAN CHRISTIAN: Well, you know, if a computer were a person, you could imagine someone sitting in your living room and you say, you know, "Can you hand me that book?" And it would say, "No, I can't do that because there's a coffee cup on it." And you say, "Okay. Well, pick up the coffee cup and hand me the book." And it says, "Well, I can't do that because now I'm holding the cup." And you say, "Okay, put down the cup, then pick up the book."
JAD: And what you quickly learn, says Brian, is that even really simple human behaviors are made up of a thousand subroutines. I mean, if you really think about it, the book task requires knowing what is a book.
ROBERT: You have to learn all about elbows and wrists.
JAD: How to grab something.
ROBERT: What is a book?
JAD: I already said that.
ROBERT: Oh.
JAD: You need to know about gravity.
ROBERT: If it's a machine, you have to teach it ...
JAD: Physics.
ROBERT: ... everything in the world in order for it to just pick up a [bleep][00:08:36.07] spoon.
JAD: Or a book.
ROBERT: I knew that.
JAD: So now think of that Svetlana bot earlier, okay? Trying to make something that could actually mimic human conversation, kinda sorta. Imagine all the stuff you'd have to throw into that. Okay, English, grammar.
ROBERT: Syntax.
JAD: Syntax.
ROBERT: Context.
JAD: Tone.
ROBERT: Mood.
JAD: Sarcasm.
ROBERT: Irony.
JAD: Adverbs.
ROBERT: Adverbs.
JAD: Turn-taking.
ROBERT: Well, it's not actually as impossible as you'd imagine. This is kind of startling. If you go back to the very early days of software programming in the mid-'60s?
BRIAN CHRISTIAN: 1964-1965.
ROBERT: This was actually done with a little program ...
BRIAN CHRISTIAN: Called Eliza. And it was developed by Joseph Weizenbaum at MIT.
ROBERT: But in Weizenbaum's case, his model was not a Russian hottie. Instead, it was a—well ...
BRIAN CHRISTIAN: Non-directive Rogerian therapist.
JAD: The what therapist?
BRIAN CHRISTIAN: It's a particular school of therapy.
ROBERT: The kind where the therapist basically mirrors ...
SHERRY TURKLE: ... it mirrors what you're saying.
ROBERT: ... what you're saying.
SHERRY TURKLE: What you're saying.
ROBERT: This is Sherry Turkle. She's an anthropologist.
SHERRY TURKLE: At the Massachusetts Institute of Technology.
ROBERT: And she worked with Joe Weizenbaum—or is it Weizenbaum? It's Weizenbaum—at MIT.
SHERRY TURKLE: So if you say, you know, I ...
BRIAN CHRISTIAN: I'm feeling depressed.
SHERRY TURKLE: A therapist says ...
BRIAN CHRISTIAN: "I'm sorry to hear you're feeling depressed."
SHERRY TURKLE: "Tell me more."
BRIAN CHRISTIAN: Joseph Weizenbaum decides, "You know, I think that's an easy enough type of conversation that I can program that into my computer."
ROBERT: And so he writes up a simple little program.
BRIAN CHRISTIAN: Just about a hundred lines of code.
ROBERT: Which does sort of what your therapist does.
BRIAN CHRISTIAN: Where it looks for a keyword in what you're saying.
ROBERT: As in, "I'm feeling depressed."
JAD: Keyword: depressed.
BRIAN CHRISTIAN: Latches onto it, and then basically flips it back to you.
ROBERT: "I'm sorry to hear that you're feeling ...
JAD: Keyword.
ROBERT: ... depressed.
BRIAN CHRISTIAN: Right.
SHERRY TURKLE: It's basically a program that inverts your words. And it's a language game.
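A minimal sketch, in Python, of the keyword-and-reflection trick being described here. This is not Weizenbaum's code (the original ELIZA was written in the mid-'60s at MIT); the reflection table, the rules, and the fallback line below are made up for illustration.

    import re

    # Illustrative reflection table: swap first- and second-person words,
    # so "my boyfriend made me come here" becomes "your boyfriend made you come here".
    REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

    # Illustrative keyword rules; the real script had many more, with rankings.
    RULES = [
        (re.compile(r"\bi am (.*)", re.I), "I'm sorry to hear you are {0}."),
        (re.compile(r"\bi feel (.*)", re.I), "Tell me more about feeling {0}."),
        (re.compile(r"\bmy (.*)", re.I), "Tell me more about your {0}."),
    ]

    def reflect(phrase):
        # Invert the speaker's words back at them.
        return " ".join(REFLECTIONS.get(word, word) for word in phrase.lower().split())

    def eliza_reply(text):
        # Latch onto the first keyword pattern that matches and flip the rest back.
        text = text.strip().rstrip(".!?")
        for pattern, template in RULES:
            match = pattern.search(text)
            if match:
                return template.format(reflect(match.group(1)))
        return "Please go on."  # default when no keyword is found

    print(eliza_reply("I am depressed"))                  # -> I'm sorry to hear you are depressed.
    print(eliza_reply("My boyfriend made me come here."))  # -> Tell me more about your boyfriend made you come here.

That is more or less the whole game: no understanding anywhere, just keyword matching and pronoun swapping.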
ROBERT: So here's what he did. He put a computer on a desk running this little program, and then he invited people to sit down and talk to it. They'd write in something, it would spit back the keyword, and then you would repeat, on and on. You wouldn't think this would be very interesting, right?
JAD: Nope.
SHERRY TURKLE: But he found that his students who knew that the program didn't know or understand, who had helped him write the program, sophisticated students, nevertheless wanted to be alone with it.
ROBERT: Wow, they actually helped write the program!
SHERRY TURKLE: Yes!
BRIAN CHRISTIAN: I can read you if you like one of the conversations.
JAD: Yes, please.
ROBERT: This is an actual transcript between Eliza and a young woman.
BRIAN CHRISTIAN: Yeah. "Men are all alike." "In what way?" "They're always bugging us about something or other."
[VOICE ACTOR, Eliza: Can you think of a specific example?]
[VOICE ACTOR, woman: Well, my boyfriend made me come here.]
[VOICE ACTOR, Eliza: Your boyfriend made you come here?]
[VOICE ACTOR, woman: He says I'm depressed much of the time.]
[VOICE ACTOR, Eliza: I'm so sorry to hear you are depressed.]
[VOICE ACTOR, woman: It's true. I'm unhappy.]
[VOICE ACTOR, Eliza: Do you think coming here will help you not to be unhappy?]
[VOICE ACTOR, woman: I need some help. That much seems certain.]
[VOICE ACTOR, Eliza: What would it mean to you if you got some help?]
[VOICE ACTOR, woman: Perhaps I could learn to get along with my mother.]
BRIAN CHRISTIAN: Tell me more about your family.
JAD: Wow!
BRIAN CHRISTIAN: And so it's this ...
JAD: That could go on for hours!
BRIAN CHRISTIAN: Well, and in fact it did.
[VOICE ACTOR, woman: My mother takes care of me.]
[VOICE ACTOR, Eliza: Who else in your family takes care of you?]
[VOICE ACTOR, woman: My father.]
[VOICE ACTOR, Eliza: Your father.]
[VOICE ACTOR, woman: You're like my father in some ways.]
[VOICE ACTOR, Eliza: What resemblance do you see?]
[VOICE ACTOR, woman: Well, you're not very aggressive.]
[VOICE ACTOR, Eliza: What makes you think I'm not very aggressive?]
[VOICE ACTOR, woman: You don't argue with me.]
[VOICE ACTOR, Eliza: Why do you think I don't argue with you?]
[VOICE ACTOR, woman: You are afraid of me?]
JAD: Wait a second. Is the woman in this transcript messing around, or is she really pouring her heart out?
ROBERT: We know that this is a woman who works for Joe Weizenbaum. She's sitting in the office, and she just can't stop talking to it. That's all we know.
[VOICE ACTOR, Eliza: What else comes to mind when you think of your father?]
[VOICE ACTOR, woman: Boys.]
ROBERT: And Weizenbaum is watching all this ...
SHERRY TURKLE: And he first thought it was funny, and then he didn't think it was funny because they were actually having conversations with it.
BRIAN CHRISTIAN: One day, he comes into the office, and ...
SHERRY TURKLE: ... his secretary ...
BRIAN CHRISTIAN: ... is on the computer, divulging her life story to it.
ROBERT: According to Weizenbaum, she even told him to please leave the room so she could be alone with it.
SHERRY TURKLE: And talk to it. And he—he was very upset. Nevertheless ...
ROBERT: When word about Eliza got out ...
BRIAN CHRISTIAN: The medical community sort of latches onto it.
JAD: Really?
BRIAN CHRISTIAN: And says, "Oh, this is gonna be the next revolution in therapy."
[NEWS CLIP: Something new and promising in the field of psychotherapy.]
ROBERT: This is from a newscast around that time.
BRIAN CHRISTIAN: Therapists in, like, phone booths in cities. And you're gonna walk in and put a quarter in the slot and have, you know, half an hour of therapy with this automatic program.
[NEWS CLIP: Computer time can be rented for $5 an hour, and there's every reason to suspect that it will go down significantly.]
JAD: People really thought that they were gonna replace therapists with computers?
BRIAN CHRISTIAN: Absolutely.
JAD: Really?
ROBERT: They did?
BRIAN CHRISTIAN: Absolutely.
ROBERT: Yeah.
BRIAN CHRISTIAN: And it was just this really appalling moment for Weizenbaum of there's something—the genie is out of the bottle, maybe in a bad way. And he does this 180 of his entire career. So he pulls the plug on the program, he cuts the funding, and he goes from being one of the main advocates for artificial intelligence to basically committing the rest of his career to fighting against artificial intelligence.
[ARCHIVE CLIP, Joseph Weizenbaum: [speaking German]]
ROBERT: This is Joseph Weizenbaum interviewed in German just before he died in 2008. It was on the German documentary Plug and Pray.
[ARCHIVE CLIP, Joseph Weizenbaum: [speaking German]]
ROBERT: "My main objection," he says ...
[ARCHIVE CLIP, Joseph Weizenbaum: [speaking German]]
ROBERT: "If the thing says, 'I understand,' that if somebody typed in something and the machine says 'I understand.'"
[ARCHIVE CLIP, Joseph Weizenbaum: [speaking German]]
ROBERT: "There's no one there."
[ARCHIVE CLIP, Joseph Weizenbaum: [speaking German]]
ROBERT: "So it's a lie."
[ARCHIVE CLIP, Joseph Weizenbaum: [speaking German]]
ROBERT: "And I can't imagine that people who are emotionally imbalanced could be effectively treated by systematically lying to them."
SHERRY TURKLE: I must say that my reaction to the Eliza program at the time was to try to reassure him. At the time, what I thought people were doing was using it as a kind of interactive diary—knowing that it was a machine, but using it as an occasion to breathe life into it in order to get their feelings out.
JAD: I think she's right to have said that to him.
ROBERT: You do?
JAD: Yeah, because he says it's a lie.
ROBERT: Well, it is a lie.
JAD: How is it a lie?
ROBERT: Well, because a machine can't love anything.
JAD: Yes, and if you are a sensible human being you know that. And it's sitting right there on the desk. It's not pretending.
ROBERT: Well, these are sensible human beings that were already a little bit seduced. I mean, just go forward a hundred years. Imagine a machine that is very sophisticated, very fluent, very convincingly human.
JAD: You're talking about Blade Runner, basically.
ROBERT: Yeah, exactly. At that point, I think I would require some kind of label to remind me that this is a thing. It's not a being, it's just a thing.
JAD: Okay, but if—here's something to think about: if the machines get to that point—which is a big if—where you'd want to label them, well, you're gonna need a way to know when they've crossed that line and become ...
ROBERT: Mindful.
JAD: Yeah.
BRIAN CHRISTIAN: Yeah, so I should back up for a sec and say that in 1950, they're just starting to develop the computer, and they're already asking these philosophical questions. Like, can these machines think? You know, will we someday be able to make a machine that could think? And if we did, how would we know? And so a British mathematician named Alan Turing ...
JAD: ... proposed a simple thought experiment: here's how we'll know when the machines make it across the line. Get a person, sit him down at a computer, have him start a conversation in text.
BRIAN CHRISTIAN: You know, "Hi, how are you?" Enter. "Good" pops up on the screen.
JAD: Sort of like internet chat.
ROBERT: Yep.
JAD: So after that first conversation, have him do it again and then again. You know, "Hi. Hello. How are you?" Et cetera.
BRIAN CHRISTIAN: Back and forth.
JAD: Then again.
BRIAN CHRISTIAN: Right.
JAD: Over and over. But here's the catch ...
BRIAN CHRISTIAN: Half of these conversations will be with real people, half will be with these computer programs that are basically impersonating people.
JAD: And the person in the seat, the human, has to judge which of the conversations were with people and which were with computers. Turing's idea was that if those computer fakes could fool the human judge a certain percentage of the time ...
BRIAN CHRISTIAN: Turing's magic threshold was 30 percent.
JAD: ... then at that point ...
BRIAN CHRISTIAN: ... we can basically consider machines intelligent.
JAD: Because, you know, if you can't tell the machine isn't human, then you can't say it's not intelligent.
BRIAN CHRISTIAN: Yeah, that's basically—yeah.
ROBERT: You said 30 percent of the time?
BRIAN CHRISTIAN: Yeah. Turing ...
ROBERT: Because the natural number to me would be half, you know? 51 percent would seem to be like the ka-ching moment.
BRIAN CHRISTIAN: Right.
ROBERT: 30 percent? I don't know.
BRIAN CHRISTIAN: Well, 51 percent is actually a horrifying number in the context of the Turing test, because you've got these two conversations and you're trying to decide which is the real person. So if the computer were indistinguishable, that would be 50 percent. You know, the judge is doing no better than chance. So if a computer hits 51 percent ...
JAD: Yeah.
BRIAN CHRISTIAN: ... that means they've out-humaned the human.
JAD: Oh yeah, that is horrifying.
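To make those percentages concrete, here is the arithmetic in a few lines of Python. The panel size is a made-up number for illustration, not a detail from the episode.

    def fool_rate(judged_human, total_judgments):
        # Fraction of judgments in which the judge decided the machine was the real person.
        return judged_human / total_judgments

    total = 12                   # hypothetical number of judgments on one program
    rate = fool_rate(4, total)   # 4 of 12, about 0.33

    print(rate >= 0.30)   # True: past Turing's 30 percent line
    print(rate == 0.50)   # False: 50 percent would mean the judges do no better than chance
    print(rate > 0.50)    # False: above 50 percent, judges pick the machine MORE often than the person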
JAD: Now something to keep in mind: when Turing thought this whole thing up?
BRIAN CHRISTIAN: The technology was so new ...
JAD: Computers barely existed.
BRIAN CHRISTIAN: ... that it was sort of a leap of imagination, really.
JAD: But no longer. Robert, bring it!
ROBERT: Can you give me, like, some excitement music here?
JAD: Absolutely.
ROBERT: Good. Because every year, the greatest technologists on the planet meet in a small room with folding chairs, and put Alan Turing's question to the ultimate test!
JAD: [laughs] Really, it's just a couple of dudes, you know, who haven't seen the sun in 10 years in a room. But we do now have this thing called the Loebner Prize, which is essentially a yearly actual Turing test.
[ARCHIVE CLIP, Loebner Prize host: Each judge on our judges' table is going to be communicating with two entities—one human and one program.]
BRIAN CHRISTIAN: The way the stage is set up is you've got the judges at a table on the left on laptops.
JAD: Uh-huh.
BRIAN CHRISTIAN: Then a bunch of giant server-looking machines in the middle that the programmers are fiddling with. And then there's a curtain on the right hand side and we're behind the curtain.
JAD: Brian actually participated in the 2009 Loebner Prize competition, but not as a programmer. He was one of the four, quote, "Confederates."
BRIAN CHRISTIAN: The Confederates are the real people that the judges are talking to.
JAD: Because remember, half the conversations the judges have are with people, half are with computers. And Brian decided to participate that year because the year before ...
BRIAN CHRISTIAN: 2008, the top program managed to fool 25 percent of the judging panel.
JAD: Pretty close to Turing's number.
BRIAN CHRISTIAN: Exactly. One vote away. And so I felt, to some extent, how can I get involved on behalf of humanity? How can I sort of take a stand?
JAD: [laughs]
ROBERT: That's a modest position for you. "All right machines, please hold your places. And now representing all humans: Brian Christian!"
JAD: Now in terms of what Brian is up against, the computer programs have a variety of different strategies. For example, there was one program in Brian's year that would do kind of a double fake out.
ROBERT: Uh-huh.
JAD: Where it would pretend not to be a person, but a ...
BRIAN CHRISTIAN: ... person who is sarcastically pretending to be a robot.
ROBERT: Oh!
BRIAN CHRISTIAN: People would ask it a simple question and it would say, "I don't have enough RAM to answer that question." Smiley face.
ROBERT: [laughs]
BRIAN CHRISTIAN: And everyone would be like, "Oh, this is such a wise guy. Ha ha ha."
JAD: I want to tell you now about one particular bot that competed Brian's year.
[ARCHIVE CLIP, Rollo Carpenter: Hi, I'm Rollo Carpenter.]
JAD: That's the guy who made it.
[ARCHIVE CLIP, Rollo Carpenter: My program is called Cleverbot.]
JAD: And that's the bot. This is a program that employs a very spooky—is spooky the right word? A very spooky strategy.
ROLLO CARPENTER: You may be surprised to hear that, despite the fact that it's called Cleverbot, it states that it is a bot. It states that it is never a human right there in front of them. Despite those facts, I receive several emails a day from people who believe that actually, they are being connected to humans.
JAD: Oh, like they think they've been tricked?
ROLLO CARPENTER: Yes, tricked into coming to a site that claims to be a bot, when in fact they're talking to humans. That no program could possibly respond in this way. And there is a certain element of truth in that.
JAD: To explain, Rollo Carpenter, like Brian, was one of those kids who was completely obsessed by computers.
ROLLO CARPENTER: I was indeed a computer-y kid.
JAD: And when he was just a teenager ...
ROLLO CARPENTER: Age about 16 or so ...
JAD: ... wrote his first chatbot.
ROLLO CARPENTER: I created a program that talked to me.
JAD: No kidding?
ROLLO CARPENTER: Yes. You typed in something, and it would say something back.
JAD: Though at that time, the responses were essentially ...
ROLLO CARPENTER: Pre-programmed.
JAD: And really simple. Kind of like Eliza. But ...
ROLLO CARPENTER: One evening, I think it was ...
JAD: ... fast forward many years. He is in his apartment. And one night, he says ...
ROLLO CARPENTER: A switch suddenly flipped in my—in my mind, and I suddenly saw how to make the machine learn ...
JAD: On its own. What if, he thought, what if it just started at zero like a little baby? And it would grow in these discrete little increments every time you talked to it.
ROLLO CARPENTER: Right. Basically, the first thing that was said to that program that I created the first version of that night, was said back by it.
JAD: Meaning, if he said to it "Hello," it now knew one thing: the word "Hello," so it would say "Hello" back.
ROLLO CARPENTER: The second thing it said was a choice of the first two things said to it.
JAD: So if the second thing you said was, "How are you doing?" it now knew two things: the word "Hello," and the phrase "How are you doing?" So it could either say hello back again, or "How are you doing?"
ROLLO CARPENTER: The third thing it said was a choice of the first three things, and so on ad infinitum—well, not quite ad infinitum but between 1988 and 1997, a few thousand conversations took place between myself and it and a few of my friends and it.
JAD: He and his friends would sit there and type things to it as a way of teaching it new things, but it was just them, so it was slow going.
ROLLO CARPENTER: So it languished for quite a long time. But then I started working with the internet, put it online.
JAD: Where anyone could talk to it.
ROLLO CARPENTER: Within the next 10 years, it had learned something like five million lines of conversation. Now it is frequently handling around 200,000 requests an hour, and it's talking to more than three million people a month.
JAD: Three million conversations a month. And after each one, Cleverbot knows a little bit more than it did before. And every time you say something to it like, "Hey, Cleverbot. Why am I so sad?"
ROLLO CARPENTER: It is accessing the conversations that millions of people have had in the past.
JAD: Asking itself ...
ROLLO CARPENTER: "What is the best overlap?" "Where is the best correlation?"
JAD: "How do people usually answer this question, why am I so sad?"
ROLLO CARPENTER: That's right.
JAD: And then a response. Cleverbot answers, "Just because." Hmm, all right. "Well, why? There must be a reason why I'm so sad." "Because you have been sitting in the same place for too long."
ROLLO CARPENTER: [laughs]
JAD: Is that—who's saying that, exactly? Where does that response come from?
ROLLO CARPENTER: And the answer is: it is one human being at some point in the past having said that.
JAD: So that is one moment of human conversation from one person.
ROLLO CARPENTER: Yes.
JAD: So it's like I'm talking to a ghost.
ROLLO CARPENTER: You are talking to—its intelligence, if you like, is borrowed from millions of people in the past. A little bit of their conversational knowledge, their conversational intelligence goes into forming your reply.
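A minimal sketch, in Python, of the learn-and-retrieve loop Rollo is describing. This is not Cleverbot's actual code or data; the word-overlap score is a crude stand-in for whatever "best correlation" means inside the real system, and the seed exchanges are invented.

    from collections import Counter

    # Each stored pair is (something somebody once said, what the other side said next).
    # The real system has millions of these; three invented ones are enough for a demo.
    memory = [
        ("hello", "hi there"),
        ("how are you doing", "pretty well, thanks"),
        ("why am i so sad", "just because"),
    ]

    def words(text):
        return [w.strip(".,!?") for w in text.lower().split()]

    def overlap(a, b):
        # Crude similarity: how many words the two utterances share.
        return sum((Counter(words(a)) & Counter(words(b))).values())

    def converse(user_text, last_bot_reply=None):
        # Retrieve: find the past utterance most like what was just said,
        # and answer with whatever a human once said in response to it.
        _, answer = max(memory, key=lambda pair: overlap(user_text, pair[0]))
        # Learn: the user's line becomes a possible future reply to whatever
        # the bot said last, so the store grows with every conversation.
        if last_bot_reply is not None:
            memory.append((last_bot_reply, user_text))
        return answer

    print(converse("Why am I so sad?"))   # -> "just because", borrowed from a past human

This is also why saying something nobody has ever said tends to trip such a system up: with no good overlap anywhere in the store, the "best" match is essentially arbitrary, and the reply comes back as a non sequitur.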
JAD: Now what's interesting, says Rollo, is that when you start a conversation with Cleverbot, it doesn't really have a personality—or no one personality.
ROLLO CARPENTER: Cleverbot is everything to everyone.
JAD: It's just this big hive, really. But as you keep talking to it, and it's sort of pulling forward from the hive these little ghost fragments of past conversations, stitching them together, a form does kind of emerge.
ROLLO CARPENTER: It reflects the person that it's speaking to. It becomes somewhat like that person.
JAD: Someone familiar.
ROLLO CARPENTER: Already, people have very emotional conversations with it. People have complete arguments with it. And, of course, they try to get it into bed.
JAD: By talking dirty to it?
ROLLO CARPENTER: Yeah.
JAD: Wow.
ROLLO CARPENTER: One thing I can tell you is that I have seen a single person, a teenage girl, speaking for 11 hours with just three 15-minute breaks.
JAD: Whoa! About what?
ROLLO CARPENTER: Everything. The day will come not too far down the road where Cleverbot becomes so interesting to talk to that people will be talking to it all day every day.
JAD: But we're not there yet. Because the same thing that makes Cleverbot so interesting to talk to also can make it kind of ridiculous. For example, in our interview with Brian—he was the first person to turn us on to this program—as we were talking, Soren just sort of suggested, "Well, why don't we just try it right now?"
SOREN WHEELER: You want to try it? You want to talk—you want to tell—say to Cleverbot, "I feel blue?"
JAD: Sure. Yeah. Are you pulling Cleverbot up? Is it just Cleverbot.org or something?
SOREN: Dot.com.
JAD: Dot.com.
SOREN: "I feel ..."
JAD: Can you say, "I feel blue because an asteroid hit my house this morning?"
BRIAN CHRISTIAN: So this is—you've hit on a perfect strategy of dealing with these bots.
JAD: Absurdity?
BRIAN CHRISTIAN: Yes. Well, it's basically saying something that has never been said before to Cleverbot.
JAD: Ah.
BRIAN CHRISTIAN: So it's likely that no one has ever claimed an asteroid hit their house. It's weird enough that it may not be in the database.
JAD: Okay.
SOREN: All right.
JAD: Let's see what it says.
SOREN: It says, "An asteroid hit my house this morning." And Cleverbot says, "I woke up at 1:00 pm this afternoon."
ALL: [laughs]
ROBERT: Well, there we go. It's not quite so clever.
JAD: See? You don't have to worry yet, Krulwich.
JAD: In fact, when I went online to YouTube and watched the Loebner competition that Brian attended?
ROBERT: Uh-huh.
JAD: It turns out none of the computers fooled the judges at all.
ROBERT: None? Any?
JAD: Well, I don't know if none-none, but they did really badly.
[ARCHIVE CLIP, Rollo Carpenter: There were no ambiguities between the programs and the humans ...]
BRIAN CHRISTIAN: For me, one of the strange takeaways of thinking so much about artificial intelligence is this feeling of how complex it is to sit across a table from someone and communicate with body language and tone and, you know, rhythm and all of these things. What happens when those conversations are working out well is that we're willing to move the conversation in ways that allow us to be sort of perpetually startling to one another.
ROBERT: That's a good word, "Startling."
BRIAN CHRISTIAN: Yeah. You learn someone through these small surprises.
JAD: Thanks to Brian Christian. His excellent book, which inspired this hour, is called The Most Human Human. Go to Radiolab.org for more info. Thanks also to our actors Sarah Thyre, Andy Richter and Susan Blackwell.
[BRIAN CHRISTIAN: Hi, this is Brian Christian. Radiolab is funded ...]
[COMPUTER VOICE: Hello. I'm a machine. Radiolab is funded in part by the Alfred P. Sloan Foundation, enhancing public understanding of science and technology in the modern world.]
[BRIAN CHRISTIAN: More information about Sloan at www.sloan.org]
[SHERRY TURKLE: Hello, this is Sherry Turkle. Radiolab is produced by WNYC and distributed by NPR.]
[COMPUTER VOICE: Bye-bye.]
-30-
Copyright © 2023 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of programming is the audio record.