Dec 12, 2025
Transcript
[RADIOLAB INTRO]
SIMON ADLER: Okay, after all of that, it is time to finally discuss, Latif, the question ...
LATIF NASSER: Yeah.
SIMON: ... the topic.
LATIF: Okay.
SIMON: The theme of the moment, perhaps.
LATIF: Climate change.
SIMON: No. Nobody cares about climate change, man. Come on.
LATIF: Simon!
LATIF: Hey, I'm Latif Nasser. This is Radiolab, where despite what reporter/producer Simon Adler just said, we here at the show—including Simon—do care about climate change. But we're here today to talk about a different huge, overwhelming thing that we're all in the middle of.
SIMON: I mean, I don't want to put words in your mouth, but what I have been feeling ...
LATIF: Yeah?
SIMON: ... is a general sense of frustration.
LATIF: Yeah. Yeah, for sure.
LATIF: Something that everybody is talking about, but nobody seems to actually understand.
SIMON: You and I have even done interviews together with people on this stuff, right?
LATIF: Which is, of course, artificial intelligence.
SIMON: So much of the coverage about this stuff right now is like this running debate, right? Where you've got people on one side saying, "These AI, you know, they think, they are intelligent, and eventually they'll outsmart and destroy us all."
LATIF: Right.
SIMON: And then on the other side, you've got people being like, "No, they—they aren't actually intelligent. They're just mimicking us, and it's not as big a deal as everyone says."
LATIF: Right.
SIMON: And I—I don't actually know who to believe.
LATIF: Yeah.
SIMON: And I think it's because, like, I don't know what AI is. Like, I don't know how it does what it does under the hood.
LATIF: Yeah.
STEPHEN CAVE: Because we don't know, right? Which is one of the most extraordinary things about, you know, machine learning, AI, is that we don't really know what they are.
SIMON: But after reading countless articles, talking to tech people and scientists, I finally felt like I was getting at that question when I talked to this guy.
STEPHEN CAVE: Stephen Cave. I'm the director of the Leverhulme Centre for the Future of Intelligence.
SIMON: He leads this sort of think tank at the University of Cambridge.
STEPHEN CAVE: And there's about 50 of us now trying to understand these systems, using a really wide range of methods, including tests taken from animal psychology.
SIMON: Tests designed to measure how well a mouse can problem-solve.
STEPHEN CAVE: And applying them to AI agents in order to understand well, where are we in the kind of evolutionary cognitive tree of life of AI?
SIMON: And they've actually turned these tests into a sort of competition ...
LATIF: Huh!
SIMON: ... that they call the Animal-AI Olympics.
STEPHEN CAVE: Yes, indeed.
LATIF: Okay! That just sounds fun.
SIMON: Right. Yeah, exactly.
LATIF: Yeah.
SIMON: So to do this, they've created a slightly lower resolution Toy Story-looking digital world.
LATIF: Okay.
SIMON: Or maybe even more accurately, like, if you know the game Minecraft?
LATIF: Oh, yeah, yeah, yeah. Sure, sure, sure.
SIMON: It looks like that. It's this three-dimensional space filled with all these different bright, primary-colored objects.
LATIF: Okay.
SIMON: And then they take these AI, which are running on basically the same kind of engine that powered ChatGPT, and they give these things a little avatar like a hedgehog or a pig or a panda, and then they just sort of place them in this 3D world and say, "There is food in here. Find it."
LATIF: So it has to, like, navigate the digital world to find—I mean, I assume it's not really food but ...
SIMON: It's this green orb that they're looking for.
LATIF: Okay.
SIMON: And I mean, there are walls that they have to, like, figure out how to get around. There are transparent walls.
LATIF: But it's like—it's like physical world problem solving?
SIMON: Absolutely. And I mean, while this is the sort of task that mice or pigeons can pull off pretty easily, for these AI agents ...
STEPHEN CAVE: Things like manipulating objects and understanding gravity, it's a real challenge.
SIMON: Like, they struggle to press a lever or perceive an edge.
STEPHEN CAVE: Which any animal can do. Or at least, you know, any mammal, say. And so effectively, these systems don't have the common sense of a mouse, whereas higher reasoning—maths and so on—they can do a hell of a lot better than humans can.
LATIF: That's Moravec's paradox, right? Like it's like, easy things are hard and hard things are easy.
SIMON: Exactly.
LATIF: Yeah.
SIMON: And, like, we've known this for a long time.
LATIF: Yeah.
SIMON: And it's pretty obvious at this point. But after running all of these AIs through this thing dozens, hundreds of times, what Stephen has seen over and over is that ...
STEPHEN CAVE: They have a completely different profile of capabilities and skills than any animal.
SIMON: ... they are not like us.
STEPHEN CAVE: No. I mean, one of its capabilities might be convincing us it's human-like, but it isn't.
SIMON: Well, okay. So then what is it like? I mean, is the AI little tadpoles, or what—what is it?
STEPHEN CAVE: Well, there is one metaphor that some people like to use, and that's the octopus.
SIMON: Hmm!
STEPHEN CAVE: You know, what's wonderful about the octopus is they are phenomenally smart. They can use tools, for example, without being taught. They develop sophisticated tactics of all kinds. There are lots of wonderful octopus escape stories.
SIMON: Well, wait, because that doesn't sound like AI at all.
STEPHEN CAVE: Um, no. [laughs]
SIMON: Then why this metaphor?
STEPHEN CAVE: Well, it's helpful not because AIs are like them, but because in a way it really shows how different intelligence can be.
SIMON: Okay.
STEPHEN CAVE: I mean, octopuses, their intelligence is distributed through their tentacles.
SIMON: He says, you know, we and all mammals have this one, central brain. But octopuses, they have nine little brains: one in the center and then one in each limb.
STEPHEN CAVE: So their tentacles can function much more independently, which is how they manage to have eight of them all doing, like, clever things all at once. And, you know, this kind of intelligence is fundamentally alien to us.
SIMON: Hmm!
STEPHEN CAVE: And that's a good way of looking at AI: alien. Profoundly alien.
SIMON: Which on the one hand makes this thing feel sort of unknowable, impossible to understand. But then on the other, while it is alien, it did not evolve in some far-off galaxy, or even the depths of the ocean.
LATIF: Right.
SIMON: Like, this is an alien we created year by year, transistor by transistor. And so this is what we're doing today. We are going to trace the evolution of this alien in our midst, this alien that we designed, in the hopes, at least, of, like, coming to some deeper understanding of what it actually is today. And then maybe, if we're lucky, that will give us some insight into this thing we are all almost certainly going to have to face off with at some point or another. So ...
LATIF: This is great. Like, I feel like we all need this—we all need this explainer.
SIMON: Great. Fill your glass, because here we go.
SIMON: Hey, you guys can hear me?
TERRY SEJNOWSKI: Yes, I can hear you, Simon.
SIMON: Hello, Terry, how are you?
TERRY SEJNOWSKI: Very good, thank you.
SIMON: Sorry for the slight delayed start here. Some classic technical difficulties, you know?
SIMON: So there are—there are a lot of different first contacts ...
LATIF: Yeah.
SIMON: ... we could point to with this alien species, but the most fun place to start that I've found is with this guy Terry Sejnowski.
TERRY SEJNOWSKI: Professor at the Salk Institute for Biological Studies.
SIMON: Who yeah, sort of like the midwife of AI.
SIMON: Is that the helpful way to think of you, or no?
TERRY SEJNOWSKI: Yes. Yes, actually. Well, it's obviously more complicated than that, but that's not a bad analogy.
SIMON: Terry trained as a neuroscientist. He came up poking probes in monkeys' heads.
TERRY SEJNOWSKI: To try to understand how the brain works.
SIMON: But then in the mid-'80s, he teamed up with some computer scientists, trying to make computers do animal brain-like things like hear and recognize sounds or visuals.
TERRY SEJNOWSKI: But—but it was going nowhere.
SIMON: Okay.
TERRY SEJNOWSKI: Because everything was based on rules at the time.
SIMON: Like all computer programming at this point, it was this incredibly complicated set of, like, if this/then that statements. So if you see this and you see that but you don't see that, then that means this.
LATIF: Right.
SIMON: This sort of web of logic.
LATIF: Right.
SIMON: Which when it comes to recognizing sounds or pictures, was a problem, because ...
TERRY SEJNOWSKI: For each rule, there are, you know, tens of thousands, a hundred thousand exceptions.
SIMON: ... just too many nuances in the rules to hard-code in.
TERRY SEJNOWSKI: And so it was clear that this approach, this way of doing it through rules, was really hopeless. And so, together with my friend and collaborator Geoffrey Hinton ...
SIMON: He started to wonder if there was a different way to tackle this.
TERRY SEJNOWSKI: Learning. And so, with a small group with computers that were puny by today's standards ...
SIMON: They set out to build a machine that could learn. And one of the first things they tried to teach it was how to pronounce English.
TERRY SEJNOWSKI: You know, text to speech in computer science.
SIMON: And amazingly ...
[ARCHIVE CLIP: Demonstration of network learning by Terry Sejnowski and Charles Rosenberg.]
SIMON: ... they have recordings from these early training sessions.
TERRY SEJNOWSKI: Now if you want to learn from experience, you have to have lots of data.
SIMON: And so ...
[ARCHIVE CLIP, Simon: So you ready?]
[ARCHIVE CLIP, Levon: Ooh. Ah. Blah.]
[ARCHIVE CLIP, woman: Okay. Sometimes, but—come on Levon, look at me.]
SIMON: ... they took a transcript of a kid talking, a transcript I had my friend and neighbor Levon reenact.
[ARCHIVE CLIP, Levon: When we walk home from school I go to my grandmother's house, because she gives us candy.]
[ARCHIVE CLIP, Simon: Nice! That's perfect. Are you ready for the next one?]
SIMON: And then what Terry did was give the computer this text, and then also gave it the exact phonemes, like the symbols for the proper pronunciation for those words—no rules, just actual pronunciations. And then said to the computer, "Quiz yourself. Like, go ahead and try, and then compare what you tried to the correct pronunciation."
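[EDITOR'S NOTE: A minimal sketch, in Python, of the "quiz yourself" step Terry describes: guess the pronunciation, then score the guess against the known answer. The names and data here are illustrative stand-ins, not NETtalk's actual code or phoneme set.]

```python
# Toy version of NETtalk's self-quiz: guess phonemes for a word, then
# count how many differ from the correct pronunciation. Training nudges
# the network's connection strengths to shrink this count over time.
target_phonemes = ["w", "eh", "n"]    # correct pronunciation of "when"
guess_phonemes = ["ah", "eh", "n"]    # the network's current attempt

errors = sum(t != g for t, g in zip(target_phonemes, guess_phonemes))
print(f"{errors} of {len(target_phonemes)} phonemes wrong")  # -> 1 of 3
```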
[ARCHIVE CLIP: First recording, de novo learning.]
SIMON: And here it is.
[ARCHIVE CLIP, computer: Ananaoneone onerunrunununowneeee mmmmmmdahhhhhhhhshnnnnn ahhhhuuuuuuuuuu shawwwwnnnnnnruuuuuuuuuuunnn shshshnnnnnnnnnn shshn garwamwammanmanmanmmmmmm ...]
LATIF: Wow!
SIMON: Right. So it has no idea what it's doing.
LATIF: Yeah, not even close.
SIMON: No. [laughs]
LATIF: It doesn't sound like a baby either. Like, that just sounds, like, glitched out.
SIMON: It's chaos, right? It's like noise, effectively.
LATIF: Yeah.
SIMON: But then ...
[ARCHIVE CLIP, computer: Oneoneoneunununnaaaaa ...]
SIMON: ... as it continued quizzing itself, comparing its output to what it should have said ...
[ARCHIVE CLIP, Levon: When I go to my cousin's I play badminton, all that.]
SIMON: ... slowly ...
TERRY SEJNOWSKI: We could actually hear the learning.
[ARCHIVE CLIP, computer: Noick, wayway mytow mytwow ...]
TERRY SEJNOWSKI: You could hear it figuring out the difference between vowels and consonants.
[ARCHIVE CLIP, computer: One. Un. Bwip. Bup.]
TERRY SEJNOWSKI: And then it would start pronouncing small words. You know, oh ...
[ARCHIVE CLIP, computer: Go to go to gate to my I I we sleeb ...]
TERRY SEJNOWSKI: [laughs] And, you know, it only took a couple of days ...
[ARCHIVE CLIP, computer: When we walk home from school I like to go to my grandmother's house where we—because she gives us candy.]
SIMON: And it was acing it.
[ARCHIVE CLIP, Levon: And we eat there sometimes.]
[ARCHIVE CLIP, computer: We eat there sometimes.]
[ARCHIVE CLIP, Levon: Sometimes we sleep overnight there.]
[ARCHIVE CLIP, computer: Sometimes we sleep overnight there. Sometimes when I go to—go to my cousin's, I get up late.]
[ARCHIVE CLIP, Levon: Badminton, all that.]
SIMON: But the really astonishing thing is that when they gave the program new words and new sentences that it had never seen before ...
[ARCHIVE CLIP, computer: He won't stop jumping or running the bathtub.]
SIMON: ... it pronounced those, too.
[ARCHIVE CLIP, computer: He keeps jumping and running, gets tired. When he goes to bed when he finally gets to sleep.]
TERRY SEJNOWSKI: [laughs] It was phenomenal!
[ARCHIVE CLIP, computer: Sometimes I get to go to bed at 12:30. Sometimes, but most of the times I don't.]
TERRY SEJNOWSKI: What we didn't appreciate back then was that NETtalk was a little bit of 21st-century AI in the 20th century, that this process of learning was the future.
[ARCHIVE CLIP, Levon: Are we done?]
[ARCHIVE CLIP, Simon: We're done. Thank you so much!]
LATIF: Well okay, but, like, what actually happened there? What is it doing? How do you get a machine to learn that?
SIMON: Well, take a baby human. You know, it's born with this clump of gray stuff in its head, which is really a bunch of neurons that are all connected in, like, a random, messy way.
LATIF: Oh, they are connected? I just imagined the baby brain was, like, nothing was connected. It was a blank slate.
SIMON: No, when the baby emerges, the neurons are all connected. They're just not connected in ways that make sense in terms of the world they've just popped into. But then when it gets some input, like it touches something hot, it gets yelled at, it gets cuddled, it starts to strengthen some of these connections and prune others back.
LATIF: Okay.
SIMON: Until you have this just unbelievably complicated network of connections that can recognize patterns in the world around it and, you know, know that this is a square, or if you poke a cat you get scratched.
TERRY SEJNOWSKI: That's right. In the brain, you adapt to your world that you happen to be in by changing the strengths of connections between neurons.
SIMON: And so basically, Terry and others wanted to create some version of that in a machine.
TERRY SEJNOWSKI: Yeah, that—you hit it. The models we were developing, these neural network models, were based on very simplified versions of brain circuits.
SIMON: Okay, but how did you—how did you do that? Like, what is—what is going on under the hood here that allows it to do this?
STEPHEN CAVE: Well, we understand mathematically how they work, and we're making progress now with trying to translate the mathematics into something that humans understand.
SIMON: And so, Latif, here is my best attempt to translate this for us humans.
LATIF: Okay.
GRANT SANDERSON: I mean, so setting aside all of the technical setup on, like, how is it even interpreting the data or what are you inputting ...
SIMON: With the help of this guy, Grant Sanderson.
GRANT SANDERSON: Yeah, I run a YouTube channel that's named 3Blue1Brown. I often talk about math, but math-adjacent things as well.
LATIF: Great.
SIMON: We're just gonna draw, like, a mental image of what one of these networks looks like.
LATIF: Okay, let's go!
SIMON: Now as we all know, these neural nets can do crazily complex things, but for now we are gonna give one a very simple problem.
[ARCHIVE CLIP, Simon: I'm gonna draw a couple of shapes. What shape is this?]
[ARCHIVE CLIP, Levon: A circle.]
[ARCHIVE CLIP, Simon: Nice.]
GRANT SANDERSON: Can we get a computer to see a circle?
[ARCHIVE CLIP, Simon: How about this?]
[ARCHIVE CLIP, Levon: A circle!]
SIMON: A very childlike task.
LATIF: Yeah. Sure.
SIMON: Now first things first, to get an image into the computer, we're gonna chop it up into a bunch of pixels, like a 10 by 10 grid of them.
LATIF: Okay.
SIMON: And we're gonna imagine those pixels as 100 light bulbs—one light bulb for every pixel, and light bulbs that will be on if their corresponding pixel is filled in with ink, and off if their pixel is empty.
LATIF: Okay.
SIMON: So you've got this circle of illuminated bulbs in this grid of bulbs that are off.
LATIF: Okay, I can see it.
SIMON: From there, for reasons that'll make sense in a minute, below that, we're gonna add a smaller grid of 10 light bulbs, and then below that, just one bottom bulb.
LATIF: At the top a hundred light bulbs, and then another layer, ten light bulbs, another layer, one bulb.
SIMON: Exactly. And that final bulb, that is just the answer, the output that when it turns on, says, yes ...
[ARCHIVE CLIP, Levon: Circles.]
[ARCHIVE CLIP, Simon: Circles, that's right.]
SIMON: ... there's a circle here.
LATIF: Okay.
SIMON: But this last bulb, it's a little bit special. It's not like the other bulbs in that it's actually on a dimmer. So it can also answer, like, "maybe a circle" because it could be a square, if it's kind of bright, or, "I'm pretty sure" if it's pretty bright, or if it's all the way on, that means "this is definitely a circle."
GRANT SANDERSON: As a side note, yeah, this feels like quite the challenge where we're torturing the poor audience members here, probably, like, on their drive and not able to allocate their, like, visual cortex to, like, try to visualize all this. But setting aside all of the technical terminology ...
SIMON: There's one last thing to do. We—we have to wire all of these bulbs together so that electricity can, like, flow from that top grid through that middle grid down to that last bulb, which will hopefully turn it on. So we call up an electrician, we tell him, "Go and connect every bulb in the top 100 to every bulb in the middle 10, and then go and connect every bulb in the middle 10 to that final bulb."
LATIF: So literally every bulb is connected to every other bulb, basically.
SIMON: Exactly. So that electricity can flow down from any bulb that's lit up, and kind of cascade through all of them.
LATIF: Got it.
SIMON: And so the electrician starts pulling the wires, soldering, and they say, "I'm done." But the thing about this electrician is they're shit. Like, they do just a terrible job. Some of the wires that they put in are like a strong copper. Others are just twine, so they can't even carry electricity. And so when this is all said and done, this network we get is kind of like a fresh baby brain with just random neurons clumped together.
LATIF: I see. Got it.
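[EDITOR'S NOTE: For readers who want to see the wiring, here is a minimal Python/NumPy sketch of the network Simon describes, with the same shape: 100 input bulbs, 10 middle bulbs, and 1 output bulb on a dimmer. The random weights play the part of the sloppy electrician; everything here is a simplified illustration, not any production system.]

```python
import numpy as np

rng = np.random.default_rng(0)

# The "wires": every input bulb connects to every middle bulb, and
# every middle bulb connects to the one output bulb. Random strengths
# stand in for the electrician's terrible wiring job.
w1 = rng.normal(size=(100, 10))   # 100 input bulbs -> 10 middle bulbs
w2 = rng.normal(size=10)          # 10 middle bulbs -> 1 output bulb

def dimmer(x):
    # The final bulb's dimmer: squashes any number into the 0-to-1 range.
    return 1 / (1 + np.exp(-x))

# A 10x10 drawing flattened into 100 on/off bulbs (random ink for now).
image = rng.integers(0, 2, size=100).astype(float)

middle = np.tanh(image @ w1)      # electricity cascades to the middle row
final = dimmer(middle @ w2)       # how brightly the last bulb lights up
print(f"'circle' brightness: {final:.2f}")   # arbitrary, since untrained
```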
SIMON: And so when we do send in an image of a circle into it, into the machine ...
[ARCHIVE CLIP, Levon: Hey, why are you using a microphone again?]
[ARCHIVE CLIP, Simon: To record your voice.]
SIMON: ... lighting up some of the bulbs in that top grid ...
[ARCHIVE CLIP, Simon: What shape is this?]
[ARCHIVE CLIP, Levon: Uh...]
SIMON: ... the electricity passes down through these random connections, from the top to the middle down to the bottom, and in all likelihood ...
[ARCHIVE CLIP, Levon: I know. A rectangle.]
GRANT SANDERSON: It's completely wrong on this.
SIMON: That final bulb might be a little lit up, or half lit up, or just completely off.
LATIF: Okay.
SIMON: Now when a child gets something wrong ...
[ARCHIVE CLIP, Simon: No, what is that?]
SIMON: ... and, like, a parent scolds them, that is altering the connections between the neurons in the brain, strengthening some, pruning others back, right?
LATIF: Right.
SIMON: And that, that is what we want to do with this machine. We want to mess with those wires, the strengths of those connections between the bulbs.
LATIF: Right, right, right.
SIMON: Now we could just go in there and rewire this thing by hand.
LATIF: Yeah.
SIMON: We could pick out the important bulbs, because we know which ones are lit up for a circle, and direct their current through the middle bulbs to that final bulb but, you know, that would take just as long as hard coding it.
LATIF: Right.
SIMON: And so instead, we're gonna give this thing the chance to learn all this, to learn what the connections should be. So when it gives us that first, random wrong answer ...
[ARCHIVE CLIP, NETtalk voice: There is a 12.2 percent likelihood of a circle in this image.]
SIMON: ... we're gonna say, "Bad robot. There is absolutely a circle in this image. Try again."
[ARCHIVE CLIP, NETtalk voice: Okay, I will try again.]
SIMON: But then after that first try, instead of us standing there saying yes or no, we are going to set it up to learn all on its own. We're going to step away and let math be its babysitter, be its teacher. And so this is the moment where we have to dive into the math a bit.
LATIF: Uh, okay?
GRANT SANDERSON: It's not that complicated. It's mostly multiplication.
LATIF: All right. Okay, let's go.
SIMON: First of all, these bulbs in the computer, they're really just numbers.
[ARCHIVE CLIP, NETtalk voice: One, two, three, four, five.]
SIMON: And the wires, you can really just think of them as variables that multiply these numbers ...
[ARCHIVE CLIP, NETtalk voice: X times 2.]
SIMON: ... as they pass through them.
[ARCHIVE CLIP, NETtalk voice: Y times 0.3.]
SIMON: A good wire multiplies the electricity by five or whatever, a bad one divides it in half or even zeroes it out. And that means we can just take this entire array of bulbs and wires and turn it into a giant equation.
GRANT SANDERSON: You know, A times B plus C times D plus E times F. There's some other math strewn in there very artfully and deliberately, but the key here ...
SIMON: Is with a bit of mathematical trickery, this equation can represent the difference between the output it is giving ...
[ARCHIVE CLIP, NETtalk voice: There is a 12.2 percent likelihood of a circle.]
SIMON: ... and the output we want it to give.
[ARCHIVE CLIP, NETtalk voice: There is a 100 percent likelihood of a circle.]
GRANT SANDERSON: And if we think, "Hey, I've got this function, and I want to, like, find a minimum of that ...
SIMON: Like, minimize the difference between your output and the output we want.
GRANT SANDERSON: ... there's a whole field of math that is just built ready to do exactly this kind of thing. This is what calculus is all about. Like, Newton, if he was rising from the grave, would just be like, showing fireworks right now, saying, "Hey, I got this. I know how to do this one."
SIMON: [laughs]
LATIF: So somehow the calculus tells you in math equation form if you're getting closer to the right answer?
SIMON: Yeah. And now don't worry, we're not gonna go into the calculus other than to say we walk away, and the calculus becomes the teacher.
LATIF: Okay.
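[EDITOR'S NOTE: A hedged, smallest-possible illustration of "the calculus becomes the teacher": one wire, one target, and the derivative of the error telling us which way to nudge the wire. Real networks do this for every wire at once via backpropagation; the numbers here are made up.]

```python
# One wire of strength w; the bulb's output is w * x.
# We want the output to be 1.0 ("definitely a circle") when x = 0.5.
x, target = 0.5, 1.0
w = 0.1                                  # the electrician's random start

for step in range(5):
    output = w * x
    loss = (output - target) ** 2        # how wrong we are, squared
    grad = 2 * (output - target) * x     # calculus: d(loss)/d(w)
    w -= 1.0 * grad                      # nudge the wire the downhill way
    print(f"step {step}: output={output:.3f}  loss={loss:.4f}")
# The loss shrinks every step -- that shrinking is the "closer" signal.
```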
SIMON: So ...
[ARCHIVE CLIP, NETtalk voice: 12 percent likelihood.]
SIMON: ... after the first wrong answer, the equation says [buzz] no. Machine tries again.
[ARCHIVE CLIP, NETtalk voice: 25 percent likelihood.]
SIMON: And the equation says, "closer." And the machine tries again.
[ARCHIVE CLIP, NETtalk voice: 77 percent likelihood.]
SIMON: And each time it tries, it messes with the wiring, the weight of the connections between the bulbs.
LATIF: Getting it closer and closer to right.
SIMON: Exactly. And what happens over time is that that middle grid of 10 bulbs, their connections back to that top grid are getting tweaked in such a way that it's like they're starting to pick up clues, like maybe it's getting stronger signals from bulbs that are part of a curve. Or maybe it figures out that the corner bulb can't be on for it to be a circle. And, like, the thing is we—we actually don't know. I mean, when people talk about these things being a black box, this is what they mean. It's this middle grid, it's all automated by math. It's picking up something and ...
LATIF: We don't know what the clues are. We just know that they're right, that the clues are—like, they work.
SIMON: Yeah. It's finding some signal that tells it there is a circle-ish thing here. And as it keeps giving answers and the equation keeps telling it whether it's right or not or closer or further away, eventually each of those middle bulbs is receiving the right electricity from the right top bulbs to know if these characteristics of a circle are there. And if they are, they pass that along to the final lightbulb, which will light up if enough of those characteristics are present. And at that point, yeah, our little network here has learned to recognize this circle, which ...
LATIF: That's actually kind of astonishing. That's pretty amazing.
SIMON: It is. But it's only this one circle. And so the important thing is that if you do this process not with just this one circle, but with tens, hundreds, thousands of examples, you know, big circles, little circles, messy circles, circles drawn by you and me, and you have the machine tweak all those different wires for all those different examples, you can then take all of that and do one final actually very simple bit of math. Just average it all together. All of the wire strengths you got from all the examples for wire one get averaged down to one value. All the wire strengths that you got for wire two get averaged down to one value. And if you've done this right, you can then send in any of the drawings it's seen before or new drawings it's never seen—circles drawn by a two-year-old or a picture of an orange—and it will say, "Yes, there is a circle there."
LATIF: Holy cow!
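[EDITOR'S NOTE: Here is the whole circle exercise in one runnable Python/NumPy sketch: the random wiring, the calculus-as-teacher updates, and many examples. One hedge: in practice the wires are usually nudged a little after every example (or the nudges are averaged over small batches), rather than trained separately per example and averaged at the end as in Simon's simplification. The drawings are crude synthetic stand-ins.]

```python
import numpy as np

rng = np.random.default_rng(1)

def make_image(circle: bool) -> np.ndarray:
    # Crude 10x10 drawings: a ring of ink for circles, random ink otherwise.
    img = np.zeros((10, 10))
    if circle:
        yy, xx = np.mgrid[0:10, 0:10]
        radius = np.hypot(yy - 4.5, xx - 4.5)
        img[(radius > 2.0) & (radius < 4.0 + rng.normal() * 0.3)] = 1.0
    else:
        img[rng.random((10, 10)) < 0.3] = 1.0
    return img.ravel()

w1 = rng.normal(scale=0.1, size=(100, 10))   # input bulbs -> middle bulbs
w2 = rng.normal(scale=0.1, size=10)          # middle bulbs -> final bulb
lr = 0.5

for step in range(4000):
    is_circle = step % 2 == 0                # alternate circle / not-circle
    x = make_image(is_circle)
    target = 1.0 if is_circle else 0.0

    hidden = np.tanh(x @ w1)                 # middle bulbs light up
    out = 1 / (1 + np.exp(-(hidden @ w2)))   # final bulb's brightness

    # The teacher: derivatives say which way to nudge every wire
    # to shrink this example's error.
    d_out = (out - target) * out * (1 - out)
    d_hidden = w2 * d_out * (1 - hidden ** 2)
    w2 -= lr * d_out * hidden
    w1 -= lr * np.outer(x, d_hidden)

# Show it drawings it has never seen before:
for is_circle in (True, False):
    x = make_image(is_circle)
    p = 1 / (1 + np.exp(-(np.tanh(x @ w1) @ w2)))
    print(f"circle={is_circle}: final bulb at {p:.2f}")
```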
SIMON: Now that process we just went through can recognize way more sophisticated things than just a shape, like cats or dogs, and, I mean, the only real difference in the model is instead of these three grids we just used, these three layers—you know, an input, a middle and an output—you just add more layers of bulbs in the middle. These multiple middle layers allow the computer to recognize progressively more complicated components of the picture. So, like, the first layer might just find the edges, the second might find textures, the third, forms, the fourth, maybe eyeballs.
LATIF: Because it's like, because everything is made up of building blocks of the layer before it?
SIMON: Yes.
LATIF: Without—crucially, without anyone labeling any of those intermediate—like, it's figuring that out itself.
SIMON: Exactly. And then using the same mathematical reinforcement, it can tune and tweak to get shit right.
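[EDITOR'S NOTE: "Just add more layers of bulbs in the middle" looks like this in a sketch; the layer sizes are arbitrary illustrations. Each layer's output feeds the next, which is what lets later layers build on the edges and textures found by earlier ones.]

```python
import numpy as np

rng = np.random.default_rng(2)

layer_sizes = [100, 64, 32, 16, 1]    # input, three middle layers, output
weights = [rng.normal(scale=0.1, size=(m, n))
           for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(x):
    for w in weights[:-1]:
        x = np.tanh(x @ w)            # each middle layer transforms the last
    return 1 / (1 + np.exp(-(x @ weights[-1])))   # the final dimmer bulb

print(forward(rng.random(100)))       # untrained, so an arbitrary brightness
```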
SIMON: Okay, wow! Well, I need a drink after all of this to sort of let all this settle in.
GRANT SANDERSON: [laughs] Great.
LATIF: Okay, like, this—I'm—I wish my kids could learn like this. Like, the way they learn is so physical, so emotional. It matters who's saying it, it matters how they're saying it. It matters the tone, it matters all these different things. Like, this is so clean!
SIMON: And, like, crazy fast. I mean, what just took us 10-15 minutes to explain, that all happens in seconds. So it can learn the circle thing at basically lightning speed.
LATIF: But, like, a circle, recognizing a circle is one thing.
SIMON: Sure.
LATIF: And, like, now we're talking, like, actually making—like, making of, you know, a sonnet as if Shakespeare wrote it, that seems like a—that seems like a very wide gulf. It seems like there's still a lot of place to go.
SIMON: For sure. And our little alien is going to have to evolve here.
LATIF: Yeah.
SIMON: But in terms of its architecture of how it does this, it's basically the exact same.
LATIF: Huh!
SIMON: The only real difference is we're shifting its—its focus from recognizing to a slightly different skill. And we're gonna get to that. You want to predict what I'm gonna say next?
LATIF: Right after a quick break?
SIMON: Exactly. Right after a quick break.
LATIF: Latif.
SIMON: Simon.
LATIF: Radiolab.
SIMON: So you asked this question to me before the break. Like, how did this thing evolve from being able to recognize shapes to generate stuff?
LATIF: Yeah.
SIMON: And I posed that very question to Grant Sanderson.
LATIF: Okay.
GRANT SANDERSON: Yeah. Okay, so I would say there's—there's many different ideas at play here.
SIMON: Who again, YouTuber, has thought a hell of a lot about this stuff. And he says the important next step is to realize that yes, you could think of what we did with those circles as having the machine recognize them, or you could say we were asking the machine to predict the answer we wanted.
GRANT SANDERSON: Like, with the circle example, there's two things that it could predict.
SIMON: Circle or not.
LATIF: Okay. So but that's—so it's not anything meaningfully different, it's just like, let's just call everything a prediction.
SIMON: Right. But it becomes important when we're talking about generative stuff.
LATIF: Okay?
SIMON: Like, in the case of language ...
GRANT SANDERSON: Predict what word comes next.
SIMON: So to explain, going all the way back to the '80s, IBM began playing around with these chatbots that you could type to and it would respond.
[ROBOT VOICE: Hello there, how are you today?]
SIMON: And the way it would do what it was doing was it would take every word that you typed in as your question, turn those words into numbers—we're not gonna go into how, because that would take an hour in and of itself.
LATIF: Okay.
SIMON: But turn those words into numbers, send it through this multi-layered set of bulbs. But in this case, those bulbs, those layers it's passing through, they haven't been trained to categorize a sentence. Like, we don't want it to say, "That was a question." Instead, it has been trained to spit out the word that is most likely to come next, to predict the most likely next word.
GRANT SANDERSON: Just one word. Just one—it's not even a word. Also, there's a nuance here between the notion of words and tokens, but that's excessive nuance.
LATIF: Yeah, but it's like, what is it even basing—like, how is it predicting that? With a circle, you know it's a circle. We know the right answer, we're giving it the right answer. It's calculating back to that right answer.
SIMON: Right.
LATIF: But, like, in a sentence it can go any million number of ways. How can it ever have a right answer to train back to?
SIMON: Well, so what IBM was doing was giving it a bunch of texts—books, transcripts, conversations—feeding that into this machine, and so then the right answer was the most likely word to follow the preceding words.
LATIF: Okay. So it's like—it's just like here's a giant stack of human talking, and in this giant stack what's the most likely thing that would have been said next in this exact scenario?
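[EDITOR'S NOTE: A tiny sketch of where the "right answers" come from: the pile of text itself. Every stretch of words is a quiz whose answer is simply the word that actually came next.]

```python
# Slice one sentence into (context, next-word) training pairs.
text = "when we walk home from school".split()

training_pairs = [(text[:i], text[i]) for i in range(1, len(text))]
for context, answer in training_pairs:
    print(f"given {context}, the right answer is {answer!r}")
```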
SIMON: Exactly. That's right. And just one brief aside, because it's sort of fun. I think I have this right, that a word is a big long list of, like, 13,000 numbers.
LATIF: What? A computer has to turn a word, one word, just like one word into 13,000 numbers?
SIMON: Yeah. And so, like, in the way that a pixel value in the circle example is, like, basically a zero or a one, it's like every word is this list of 13,000 numbers.
LATIF: Oh! It's so weird that it, like—that that's—that's the simpler version for it.
SIMON: [laughs] I know, right?
LATIF: Let me turn it into this, like, phone book of numbers.
SIMON: Which is again, like, which—which points to how these things are so not us.
LATIF: Yeah, they're really not us.
SIMON: Not at all.
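[EDITOR'S NOTE: A toy version of that "phone book of numbers": an embedding table that assigns every word its own long list of numbers. Real models use lists on the order of 12,000 numbers per token (GPT-3's were 12,288, close to Simon's 13,000 figure); 8 keeps the printout readable. The values here are random stand-ins for learned ones.]

```python
import numpy as np

rng = np.random.default_rng(3)

vocab = {"when": 0, "we": 1, "walk": 2, "home": 3}
embeddings = rng.normal(size=(len(vocab), 8))   # one row of numbers per word

word = "walk"
print(word, "->", np.round(embeddings[vocab[word]], 2))
# The network never sees letters -- only these lists of numbers.
```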
LATIF: Wow!
SIMON: But they're using us, though. Right? Like, it's our talk that's getting turned into numbers.
LATIF: Huh.
SIMON: And it literally does it one word at a time. So after it's written the first word of its response, it just does the whole process over again. It takes all the words in your question plus the first word it predicted, sends all that through the network again ...
GRANT SANDERSON: And then it just predicts the next word after that.
SIMON: Sends that through those bulbs again.
GRANT SANDERSON: And then the next word after that.
SIMON: Does the whole thing again.
GRANT SANDERSON: And plays the same game over and over and over. And one of the words in its vocabulary is the, like, "end conversation" token. So it—like, it has some notion of when to stop, but the act of stopping is itself just one more prediction. It's—it's one more probability in that big list of things that should happen next.
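[EDITOR'S NOTE: The loop Grant describes, sketched in Python. `predict_next` is a hypothetical stand-in; a real model would run the whole network over everything said so far and return one more token.]

```python
END = "<end>"   # the special "end conversation" token Grant mentions

def generate(prompt, predict_next, max_words=50):
    words = list(prompt)
    for _ in range(max_words):
        nxt = predict_next(words)   # the entire network runs again ...
        if nxt == END:              # ... and stopping is itself a prediction
            break
        words.append(nxt)
    return " ".join(words)

# Toy stand-in model: always answers with the same five predictions.
canned = iter(["there", "is", "a", "circle", END])
print(generate(["what", "shape", "is", "this?"], lambda words: next(canned)))
```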
SIMON: And as I said, this is how they were doing it all the way back in the '80s. And I mean, if you interacted with a chatbot even in, like, the 20-teens, this is the way they were doing it as well.
LATIF: Really?
SIMON: Do you have any recollection of when you first came in contact with one?
LATIF: Ah, God. I feel like it must have been like one of those customer service bots on a website kind of thing.
SIMON: Okay, sure. And I'm sure, not just because it was a customer service experience but because it was an early chatbot experience, it wasn't very good.
LATIF: No, no, no. Terrible. No. Terrible.
SIMON: And a big part of why they were bad was ...
STEVEN LEVY: They had difficulty dealing with longer stretches of text.
SIMON: This is Steven Levy.
STEVEN LEVY: Editor-at-large at Wired.
SIMON: He's been covering this stuff for ...
STEVEN LEVY: Yeah, yeah, yeah. I mean ...
SIMON: ... a long time.
STEVEN LEVY: I published a book in 1992 called Artificial Life.
SIMON: I was two years old, by the way.
STEVEN LEVY: [laughs] Thanks for that.
SIMON: [laughs] Sorry!
STEVEN LEVY: Yeah. Yeah, thanks.
SIMON: And, he says, because it predicted words one at a time and one after the other, the longer the question or the longer the answer, the more likely it was to miss or lose the larger meaning, and so eventually predict a word that just doesn't make sense or is out of place.
STEVEN LEVY: Exactly.
LATIF: Huh.
SIMON: And so just to give one very concrete example to illustrate it, like, the sentence, "What sound does my dog make when I slam the door?" It's like ...
LATIF: [laughs] That's so—I can see why that would be confusing.
SIMON: Right. Like, you have to somehow know that in that sentence, "dog" is really the operative term here.
LATIF: Right. Right.
SIMON: The important noun, it's not "I" or "door."
LATIF: Right. Right. Right.
SIMON: So in 2017 ...
STEVEN LEVY: This guy, you know, Uszkoreit, he—who worked at Google ...
SIMON: Set out to solve this dog/door problem.
STEVEN LEVY: He thought that the thing should be able to figure out, "Oh, this is the most important part of the sentence. This is what I should pay attention to."
SIMON: And now the question becomes, like, how the heck does one go about doing that? And what they figured out was the problem here is we're giving it one word at a time and we're having it predict one word at a time.
LATIF: Mm-hmm.
SIMON: And what we need to be able to do instead is have it somehow process the sentence as a whole, so that, you know, something at the end of the sentence can sort of feed back on the weight or meaning it gives to something at the beginning of the sentence. And one way that you can just imagine it doing this is that instead of just making a prediction and giving an answer, you need to take in all the information, make a prediction, but then just, like, set that aside, because you're gonna take in all that information again, and then we're gonna send it through again and again and again, each time focusing on a different word in the sentence, generating a different possible prediction before landing on ...
LATIF: Oh my God!
SIMON: ... some final prediction, which God willing would be "bark."
LATIF: It's like the computer simultaneously lives in the multiverse of that sentence where—where each word in that sentence is the most important.
SIMON: Yeah. And, like, I—I've looked at this stuff for months, and I still don't totally understand exactly how a machine does this, but ...
STEVEN LEVY: Well, I mean, something like that. And also, you know ...
SIMON: [laughs] You can say no. You can tell me I've got it wrong.
STEVEN LEVY: Well, I mean—I mean, in the raw sense yeah, that's the idea.
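[EDITOR'S NOTE: The mechanism Simon is gesturing at is usually called attention. Here is a minimal NumPy sketch of its core: every word scores its relevance to every other word, all at once, and those scores become weights. The vectors here are random stand-ins; in a trained model they are learned, which is what would make "sound" attend heavily to "dog."]

```python
import numpy as np

rng = np.random.default_rng(4)
words = ["what", "sound", "does", "my", "dog", "make",
         "when", "I", "slam", "the", "door", "?"]

d = 16
queries = rng.normal(size=(len(words), d))   # what each word is asking about
keys = rng.normal(size=(len(words), d))      # what each word offers

scores = queries @ keys.T / np.sqrt(d)       # every word scores every word
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)

# Row 1 = how much "sound" attends to each other word.
print(dict(zip(words, np.round(weights[1], 2))))
```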
LATIF: Like, the complexity here, you can see it's going through the roof here. Like, where you're like, oh God, this is so much more computing you need to do.
SIMON: Totally. And this was a big barrier for a long time. I mean, that's why these chatbots were almost as bad in the early 2000s as they were in the 1980s. And this is where we get to the next step in the evolution of our little alien friend here, which, as many evolutionary leaps are, was mostly a hardware upgrade. I mean, if you have been following the news about AI at all, you've probably heard this term, GPU.
[ARCHIVE CLIP: GPUs are components that go into data centers.]
SIMON: Or the company ...
[ARCHIVE CLIP: Computer chip maker Nvidia.]
SIMON: ... Nvidia ...
[ARCHIVE CLIP: The most valuable company in history.]
SIMON: ... that makes these things.
[ARCHIVE CLIP: Its story, of course, wrapped up in the frenzy around the future of artificial intelligence.]
SIMON: These things—and this company—have been at the center of the conflict between China and the US when it comes to export controls.
[ARCHIVE CLIP: The idea here is for the US to kind of limit the ability for China to catch up when it comes to AI.]
SIMON: And interestingly, what these GPUs, these graphical processing units, were originally designed for was computer games.
GRANT SANDERSON: Video games, things like that.
SIMON: And what they're really good at is just doing a bunch of different math problems all at once.
GRANT SANDERSON: Exactly. It's just—it's just all about multiplying and adding numbers as fast as you can. There's some other things but, like, by and large, like, just do those two things, and we're off to the races.
SIMON: And doing these math problems all at once, which is called "parallel processing," that's exactly what these learning machines needed to do some version of that super complicated multiverse prediction thing we discussed.
LATIF: Sure. Sure. Sure, okay.
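[EDITOR'S NOTE: A small illustration of why this is a natural fit for GPUs: a whole layer's worth of multiply-adds, for a whole batch of inputs, collapses into one matrix multiplication, and each of those multiply-adds is independent enough to run at the same time.]

```python
import numpy as np

rng = np.random.default_rng(5)

batch_of_images = rng.random((64, 100))   # 64 drawings, 100 "bulbs" each
wires = rng.normal(size=(100, 10))        # one layer's wires

# One call = 64 x 100 x 10 = 64,000 multiply-adds. On a GPU these run
# in parallel, which is what makes training giant models feasible.
middle_bulbs = batch_of_images @ wires
print(middle_bulbs.shape)                 # (64, 10)
```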
SIMON: And so with these GPUs and this new parallelized architecture that Google named a "transformer," all of a sudden, they could get a machine to parse those longer sentences and give at least reasonable answers to more complicated questions.
LATIF: All right.
SIMON: But what really sent these AI chatbots into the stratosphere was a kind of knock-on effect of this parallel processing.
GRANT SANDERSON: Because when you can process everything at the same time in parallel, you can actually train on a lot more material in the same amount of time.
SIMON: And so eventually they just gave it basically the entire internet, almost everything we humans have ever said on the internet, as its training material, and started sending that through this network of light bulbs and wires that was just unimaginably big. Like, to get a sense ...
GRANT SANDERSON: In our smaller example with the circle ...
SIMON: There's something like a thousand-and-some-odd parameters.
LATIF: Right.
SIMON: A thousand or so of those wires.
GRANT SANDERSON: GPT-3, which was kind of dumb by today's standards, but it came out, it had 175 billion parameters.
SIMON: 175 billion things that could be tweaked?
GRANT SANDERSON: Yeah. And many of the ones that we have now, they're trillions of parameters.
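[EDITOR'S NOTE: The parameter counts check out as simple arithmetic. Counting each wire (and, if you like, one bias per bulb) in the toy circle network gives Grant's "thousand-and-some-odd"; GPT-3's published figure was 175 billion.]

```python
toy = 100 * 10 + 10 * 1        # wires between the layers = 1,010
toy += 10 + 1                  # plus one bias per bulb = 1,021 in total
print(f"toy network: {toy:,} parameters")

gpt3 = 175_000_000_000
print(f"GPT-3: roughly {gpt3 // toy:,} times as many")   # ~171 million x
```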
SIMON: And as they fed basically all the things we humans have ever said on the internet into this thing ...
GRANT SANDERSON: Throwing way more training examples and way more compute than anyone would reasonably think to do.
SIMON: ... slowly they started to notice ...
GRANT SANDERSON: That with a sufficiently large amount of data on a sufficiently large model run with sufficiently many cycles of training, these new computers do seemingly intelligent things.
SIMON: Now a lot of what I just described was written up in a paper called "Attention Is All You Need." And these findings are really what unlocked these large language models like ChatGPT. And that's all it was really intended for. But ...
STEVEN LEVY: There was a passage in there saying, "We think this can work for images and video." And indeed, that turns out to work.
SIMON: That same basic model of massive parallel processing with tons of input, that could predict the next part of an image or sound.
[ARCHIVE CLIP: The moment civilization was transformed.]
SIMON: And that moment, that realization, is really what triggered the explosive proliferation of artificial intelligence, different kinds, practically different species of AIs that we are living amongst today.
[ARCHIVE CLIP: New artificial intelligence systems.]
[ARCHIVE CLIP: Machines that can teach themselves superhuman skills.]
[ARCHIVE CLIP: Chat GPT-3.]
[ARCHIVE CLIP: GPT-4.]
[ARCHIVE CLIP: Introducing Apple Intelligence.]
[ARCHIVE CLIP: DALL-E.]
[ARCHIVE CLIP: An app called Lensa.]
[ARCHIVE CLIP: ARD.]
[ARCHIVE CLIP: BARD.]
[ARCHIVE CLIP: It's called Midjourney.]
[ARCHIVE CLIP: Text-to-video art.]
[ARCHIVE CLIP: Generated by an AI app.]
[ARCHIVE CLIP: It's crazy. Look at this.]
[ARCHIVE CLIP: So I don't know what AI it is they're using.]
[ARCHIVE CLIP: Yes, it feels like an episode of Black Mirror.]
LATIF: So it's like—it's like all of these different apps doing all of these different things in all these different mediums. They're taking in a huge amount of examples, and then they're using fancy math to basically predict the next word, the next pixel, the next note. And from that, it's, like, generating this whole huge diversity of new stuff.
SIMON: Yeah, basically. And I mean, it—it's also, just as we described, doing something that I don't totally understand, that's more holistic than just looking at the thing that happens next. But it is drawing on the examples it's been given to decide what should happen next, which suddenly sounds not so simple, and that ...
LATIF: It does send you into a spiral, because it's like—it's like, is what I do any different from that, just spewing out, you know, some iteration of everything else I've seen before this?
SIMON: Yeah, but first of all, you're not—you're not pulling from the whole internet, right? Like, you have to depend on just the limited things you've experienced or can even maybe remember.
LATIF: That's fair.
SIMON: And your, like, math is also just way sloppier. It's not as accurate.
LATIF: Yeah.
SIMON: And to that point—and maybe we shouldn't even go here, but there's this one other thing that you can control in these models, which is called the temperature, which is like this final knob you get to tweak on the thing. And so if you have—I think it's if you have the temperature all the way down it will give you the most likely thing to come next. If you turn the temperature up a little bit, though, it then is gonna pick, like, the second or third most likely thing to come next.
LATIF: No! So you can control, like, how precise you want the math? Like, you can—you can say I want it a little stanky?
SIMON: Yeah. Like, there's a little bit of randomness in it, then, that it's then acting upon in what it does next. So maybe you just want the temperature turned up on, like, every third word so that there's this almost spontaneous feeling, serendipitous creation—act of creation that—that comes out of this rigid math.
LATIF: Like, it's like, something startlingly creative might just be ...
SIMON: A less right answer.
LATIF: ... a less right answer. Wow.
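[EDITOR'S NOTE: A sketch of the temperature knob, assuming the usual formulation: the model's raw scores are divided by the temperature before being turned into probabilities, so low temperatures exaggerate the favorite and high temperatures give "less right" words a real chance. The words and scores below are made up.]

```python
import numpy as np

words = ["house", "home", "porch", "castle"]
scores = np.array([3.0, 2.5, 1.0, 0.2])   # raw next-word scores (invented)

def odds(temperature):
    p = np.exp(scores / temperature)      # softmax with a temperature dial
    p /= p.sum()
    return dict(zip(words, np.round(p, 2)))

print("T=0.2:", odds(0.2))   # nearly always the top pick
print("T=1.5:", odds(1.5))   # "stankier": real chances for the others
```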
SIMON: Yeah. And just by doing that, it's going to keep doing stuff that we are going to get increasingly uncomfortable with.
LATIF: Yeah.
SIMON: Like, right now there is an AI-generated song on the Billboard country charts.
LATIF: Really? I didn't hear about that.
SIMON: But, like, if that's the case, I see no way that eventually a fully AI-generated film won't hit the box office. Like, that—that's just going to happen. But when it happens, it will be only because of all of this math.
LATIF: To me, I think the thing that makes me—it makes me realize is when you see under the hood, what you see is less like something spooky and ethereal.
SIMON: Yeah.
LATIF: Like, there are times when it gets spooky, when, like, there'll be a time, like, I'll be listening to, like, an AI-generated podcast, and then one of the hosts breathes. And I'm like, "Wait, that's so wei—like, it doesn't even need oxygen. Why is it breathing?" And now it's like, oh, because you know that, like, that's just the next statistical thing that would come in that sentence is a breath. That to me—that to me is like—it's much less eerie because you can see where it got it from.
SIMON: Right. But okay, I do have one bit more for you, because I don't know, I—I still found myself wondering how it will feel as these things get better and better, and in particular, what it'll feel like in the moments we sit across from it and it is better than us at something we have spent our lives working on? That it is better than us at something we truly love.
FAN HUI: Yeah. Maybe—maybe people, or my friends tell me, like, "Wow! You are the first professional Go player be famous because you lost the game."
SIMON: [laughs]
FAN HUI: So yeah, that's me.
SIMON: And so I got in touch with this guy.
FAN HUI: Fan Hui. I'm a professional Go player. Three-time European champion.
SIMON: So real quick, Go, it is an ancient Chinese game, considered probably the most complicated board game in the world to teach a computer to play, because of just how open-ended it is. All you really need to know is you are trying to control as much of the board as possible. You go back and forth with your opponent, placing one stone at a time, and you control portions of the board, or "territory," by either, like, cordoning off sections of it or encircling your opponent's stones.
FAN HUI: It's a very simple idea, but it's difficult.
SIMON: Because with such simple rules, there are just this crazy number of ways the game can play out. In fact, folks like to say that there are more possible ways for a Go game to go than there are atoms in the known universe.
FAN HUI: Yes.
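[EDITOR'S NOTE: A back-of-envelope check on that claim. Each of the 361 points on a 19x19 board can be black, white, or empty, which bounds the number of board arrangements; atoms in the observable universe are usually estimated near 10^80, and the number of possible move sequences is vastly larger still.]

```python
arrangements = 3 ** 361   # black, white, or empty at each of 361 points
print(f"about 10^{len(str(arrangements)) - 1} board arrangements")  # ~10^172
```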
SIMON: Anyhow, back to Fan.
FAN HUI: I remember, I discovered Go age, like, six in my school in Xi'an. And I feel something. Oh! This game, I can play. And I progressed very quick. One year after I learned Go, first in my school. I'm number one.
SIMON: Three years after that ...
FAN HUI: I'm in the best team in the province.
SIMON: And not long after that ...
FAN HUI: I stopped my school. I only learned Go game.
SIMON: I mean, for years ...
FAN HUI: Every day, only thing you do is just to play Go game. Twelve, twelve hours.
SIMON: Twelve hours of playing?
FAN HUI: [laughs] Yeah. It's no joke.
SIMON: Around age 15 he went pro, and somewhere along the way, he says, he noticed this almost magical quality of the game.
FAN HUI: Go, for me, it's like a mirror.
SIMON: A mirror?
FAN HUI: Mirror. Yeah. Because when you play, you can see your mind on the board.
SIMON: He says all the choices you make, whether you're aggressive and attack or are patient and waiting, you know, in a sense, how you think, stares back up at you.
FAN HUI: It's a print. Mind print.
SIMON: And your opponent's mind, he says, it's printed there, too.
FAN HUI: So I play with someone, I don't know him, I never talk with him. I play one game, I know him. This is magical.
SIMON: But this mirror of his—well, it was about to get shattered.
FAN HUI: 2015, Demis Hassabis ...
SIMON: A researcher at Google.
FAN HUI: Sent me an email like, "We have some very exciting Go project. Can you go to our office, visit? We will show you our project." I tell, "Yes. Okay, why not?"
SIMON: And what they showed him was this thing called AlphaGo. It was a computer that had learned how to play the game. And they asked him, like, will—will you play against this?
FAN HUI: So I tell, "Okay, we can play together, because I will win. It's just a program! It's a program! What can you do? You can win with me? Never! It's like zero percent chance to win this. Zero percent."
SIMON: And why—why were you so confident?
FAN HUI: Because I know the best program this moment, I can give six-stone handicap.
SIMON: Handicap.
FAN HUI: Handicap. Handicap game.
SIMON: Got it, got it, got it, got it.
FAN HUI: So how you can possible make the technique, make this huge difference in just months? It's impossible.
SIMON: And so a month later in this windowless office room, Fan faced off with this computer and its human stone-placing helper in a best-of-five game match.
FAN HUI: The first game, all the game, I feel good. I think I will win. But end of game ...
SIMON: With just a few stones left to play ...
FAN HUI: Oh, I was stupid! I make some mistake, and I lost my first game, okay?
SIMON: But, you know, he's thinking, "I was sort of arrogant going into this. I was overly confident."
FAN HUI: So next game, I will be careful. I will play more seriously, I will win the game.
SIMON: So the next day, next game, he sits down at the board, starts carefully placing his stones. And it's looking good on the board, but inside his head ...
FAN HUI: I feel something really difficult. Quite difficult. Very difficult.
SIMON: Because ...
FAN HUI: I like fight, but AlphaGo don't fight with me. And if I wanted something Alpha give me very easy.
SIMON: Looking down at the board, he was not able to see his opponent's mind in the way he always had.
FAN HUI: No.
SIMON: There—there was no bravery. There was no subterfuge that he could sense.
FAN HUI: I see AlphaGo won't do this, AlphaGo won't do that. But why he won't do this? You cannot find it. You can't.
SIMON: And so he didn't know how to respond to it. His mind started to race.
FAN HUI: Good move, bad move. Good move, what mean? Bad move, what mean? Good move, what think my teacher? Good move, what think my student? Everybody, all my friends.
SIMON: And he realized that with all these emotional pushes and pulls, that eventually ...
FAN HUI: I will make mistake. But AlphaGo? No. Never. When you think about this, the confidence is crushed. It's crushed. All crushed. And I lost again, very, very badly. And I lost again for third, fourth and the last one.
LATIF: Yeah. Damn.
FAN HUI: But, you know, this experiment, it's really good for me. This is a—this is a moment I really see myself.
SIMON: Really? You think AlphaGo taught you to be more ...?
FAN HUI: Myself. Yeah. I think this is AlphaGo teaching me about that.
SIMON: And why?
FAN HUI: Because I see myself. So it's like AlphaGo teach me that our life, we will always—lost. Lost, lost, lost! Sorry, it's real life. It's our life. I think this is human. This is important for us.
SIMON: I think what he saw in that game as he was losing was kind of what you were saying about seeing under the hood, making AI less spooky.
LATIF: Yeah.
SIMON: Like, he could see it wasn't magic. It was math with no mistakes.
LATIF: Right.
SIMON: And when he saw himself, you know, like, not being the perfect Go player in any given moment, or in every given moment, like, that's what makes him a person, a person who could love something but still lose at it, maybe feel bad about that and then use that feeling to figure out what to do next.
FAN HUI: Today I'm teaching the Go in China with a student.
SIMON: Wait, why are you teaching Go? The computer will always win.
FAN HUI: Yes, yes. But be careful, because I think all you experiment to learn is still useful. So don't worry what will be coming. You can do nothing. Accept it, and just learn.
TOM MULLANEY: Yeah. I get that.
SIMON: Before we wrap this thing up, I wanted to put all of this in front of someone. And not—not an AI person, but somebody with a really wide scope on technology and history. And so I went to this guy.
TOM MULLANEY: Tom Mullaney. I'm a professor of modern Chinese history at Stanford University.
SIMON: I worked with him years back on a story about typing in Chinese, and he's just one of the most thoughtful and informed people I know.
TOM MULLANEY: That means a lot—that means a lot to me.
SIMON: So how—how would you respond?
TOM MULLANEY: Well, I mean, everyday life is, at its core, a study of this awful, amazing, horrifying, never-ending surprise of what it means to be born and live and die as a human. And even if at the end of the day, an AI is orders of magnitude smarter, AI, just by definition, cannot suffer and rejoice and live and die in quite the same way that humans can, in the same way that we cannot live and die and suffer and comprehend and feel the way an octopus can. I mean, the only thing an AGI will be able to do is contemplate, "My goodness, what does it mean to be an AI?" And so I am not worried at all about what AI means with regard to meaning, human identity—what it means to be human—or any of that.
SIMON: That was very beautiful. And while I love that, I'm still, like, but this is gonna mess everything up so badly.
TOM MULLANEY: Yeah. Oh, no, I agree.
SIMON: Okay. Go.
TOM MULLANEY: I mean, this is gonna get weird down to the fabric. But fast forward this, you know, 20, 30 years if we're still around at a sort of climate change level, when another future human is sitting in this fabric-altered world, it will still be a group of humans rejoicing, suffering. Like, it will still be that condition. And so it's kind of a—it's kind of a libera—for me, it's kind of a little bit of a liberatory time. It's a great—maybe we'd get to free up a little bit more space to get back to work thinking about how to be human, because we have not—we have not even come close to solving that issue.
LATIF: Special thanks to Stephanie Yin and the New York Institute of Go for teaching us the game. To Mark, Daria and Levon, to Barbara Svenich. And of course, thank you to Grant Sanderson for his unending patience explaining the math of neural nets to us. Grant is kind of like your favorite math nerd's favorite math nerd. His YouTube channel is 3Blue1Brown. Check it out.
LATIF: This story was reported and produced by Simon Adler, with original scoring and sound design by Simon Adler. Which brings me to the last unsavory thing I have to say, which is goodbye to Simon Adler, who happens to be one of our best reporter/producers here at the show, and also a friend. He's going off to, among other things, pursue his music career. And this was his last episode on staff with us.
LATIF: Chances are, if you list out your favorite episodes from the last 11 years at the show, more than a couple will be his. Could be some of the tech stories he did. He did stories about drones in Ukraine, about content moderation on Facebook. Could be some of the international stories he did. He reported about the hunt for an endangered rhino in Namibia. He did a story about a species of raccoon on the Caribbean island of Guadeloupe. He did a lot of stories about democracy as well. Covered a town, Seneca, Nebraska, that voted itself out of existence. He did a story back in 2017 about a New York City Council race where the campaign manager was a little-known guy named Zohran Mamdani.
LATIF: Besides being a killer reporter, not to mention composer and interviewer, Simon has also spent so many hours coaching an entire generation of staffers and interns. He's so generous with his expertise and his time, really someone who makes everyone around him better. Anyway, we have been so lucky to have him as part of our nerdy band for 11 years. Check out his band, Windstar Enterprises, on Instagram. That's Simon and another fellow former Radiolabber, Alex Overington. We just—we already miss you, Simon. And good luck out there.
[LISTENER: Oh, you want me to say this? Oh, that's fun! Hi, I'm Cordelia, and I'm from New York City. And here are the staff credits. Radiolab is hosted by Lulu Miller and Latif Nasser. Soren Wheeler is our executive editor. Sarah Sandbach is our executive director. Our managing editor is Pat Walters. Dylan Keefe is our director of sound design. Our staff includes: Simon Adler, Jeremy Bloom, W. Harry Fortuna, David Gebel, Maria Paz Gutiérrez, Sindhu Gnanasambandan, Matt Kielty, Mona Madgavkar, Annie McEwen, Alex Neason, Sarah Qari, Anisa Vietze, Arianne Wack, Molly Webster and Jessica Yung. With help from Rebecca Rand. Our fact-checkers are Diane Kelly, Emily Krieger, Anna Pujol-Mazini and Natalie Middleton.]
[LISTENER: Leadership support for Radiolab's science programming is provided by the Simons Foundation and the John Templeton Foundation. Foundational support for Radiolab was provided by the Alfred P. Sloan Foundation.]
-30-
Copyright © 2025 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of programming is the audio record.