Outthinkers

#112—W. Russell Neuman: AI's Role in Evolutionary Intelligence

Outthinker Season 1 Episode 112


W. Russell Neuman is Professor of Media Technology at New York University and a founding faculty member of the MIT Media Lab. He served as a Senior Policy Analyst in the White House Office of Science and Technology Policy.

His books, like The Digital Difference: Media Technology and the Theory of Communication Effects and his most recent, Evolutionary Intelligence: How Technology Will Make Us Smarter, are rooted in the view that human intelligence and human communication are intertwined. Language and communication have been intrinsically linked to the evolution of human intelligence since the dawn of humankind. As our technologies for communication have advanced, so too has the sophistication of our cognitive abilities. As you heard in the highlighted clip, this insight gives us a very interesting glimpse of how human intelligence may next evolve as AI comes into play. In this episode, they discuss:

  • The definition of intelligence at its fundamental roots, and how human intelligence is similar to or differs from machine intelligence.
  • How human intelligence has evolved as our forms of communication evolved, and what this insight can tell us about the next stage of human intelligence in the era of AI.
  • Why machines and AI aren't going to replace human intelligence, but rather converge with it and complement it: think compensating for human cognitive biases and weaknesses, not taking over.
  • The future of intelligence as we know it, and how we are already in what Russ calls the “Evolutionary Intelligence” era.

__________________________________________________________________________________________
Episode Timeline:

00:00—Highlight from today's episode
1:08—Introducing Russ Neuman + the topic of today’s episode
3:32—If you really know me, you know that...
4:12—What's your definition of strategy?
5:04—What is your definition of intelligence?
8:08—Four leaps in human intelligence: language, land, leverage, and literacy
13:11—What is the name of our current revolution?
15:42—The regulation of AI
17:46—How machines can help with shortcomings of human cognition
19:41—How can a strategist use AI as a compensatory tool?
20:52—How can AI tame our natural human biases?
22:17—What are the scenarios we should be worried about with AI?
25:31—How do you see human machine communication evolving?
27:28—Are we already digital beings?
30:14—How many foundational models will there be?
33:01—How can people follow you and continue learning from you?

__________________________________________________________________________________________

Additional Resources:

Book site: Evolutionary Intelligence: How Technology Will Make Us Smarter
LinkedIn: https://www.linkedin.com/in/profwrussellneuman/

Thank you to our executive producer Zach Ness, our producer Nazanin Homayoun Jam and our editor James Pearce. If you enjoyed this episode, please follow, download, and subscribe. I’m your host, Kaihan Krippendorff—thank you for listening.

Follow us at outthinker.com/podcast

Kaihan Krippendorff: Thank you so much for being here. I have loved reading your book, and I'm excited to dive into it.
 
 W. Russell Neuman: My pleasure.
 
 Kaihan Krippendorff: So I'd like to warm us up, and I'll say for the audience that we indirectly know each other, because you talked with my father at the University of Pennsylvania, and both of you have sat and thought a lot about this interface between machine and human communication. I'd like to start off by just getting to know you a little bit personally and ask you to complete the sentence for me: If you really know me, you know that...
 
 W. Russell Neuman: I am a curious being and that I welcome your questions, and I welcome when you challenge me because that's a lot more fun than just agreeing with me.
 
 Kaihan Krippendorff: Okay, I'll try to do that. It seems like you've thought many layers deep into your arguments, but I'll challenge you a little bit. This is a podcast for strategists, as well as entrepreneurs who are shaping the future. You think a lot about strategy, and maybe strategy is about decision making; you think a lot about human and machine decision making. What's your definition of strategy?
 
 W. Russell Neuman: My definition of strategy is recognizing that you're wrong, and being open to the question of how wrong you are this time.
 
 Kaihan Krippendorff: I like that. Explain that: you're always wrong, and the question is how wrong you are this time. There are no perfect answers.
 
 W. Russell Neuman: You'd basically be arguing that any of the strategists with whom you interact are humble, and you'd need to persuade me of that, because most of the folks in strategy pick a particular strategic approach, and then they become advocates for that approach. My view is that advocacy has to be highly conditional, and the best strategists are saying: this is my strategy, and as conditions change, you're gonna watch my strategy change very quickly. Rather than being an advocate for a strategy, I'm an advocate for adjusting the strategy to the changing conditions.
 
 Kaihan Krippendorff: I'd love to open up with the premise of the book, so that people can understand what that means. What's your definition of intelligence?
 
 W. Russell Neuman: My definition of intelligence is acting in a way that is thoughtfully responsive to the conditions around you, so that you can achieve your goals. Evolutionary intelligence is a recognition that human beings, talk about being humble, are slow, naked, not very strong. What in fact gives us the capacity to survive, when many of the other species with whom we have competed have become extinct, is that we are adaptable: we change appropriately in response to the circumstances. So our capacity to clothe ourselves, the invention of the wheel, and ultimately the invention of machines meant that even though we weren't very strong, we were able to harness the energy of those machines. So a kind of definition of human survival is the capacity to adjust our environment, to enhance not only our survival but our thriving, successfully enjoying and taking advantage of our environment. And finally and importantly, and this is the key issue we deal with in the book, evolutionary intelligence.
 
We've now confronted a technology that's gonna make us smarter. And evolutionary intelligence is recognizing that this has a very unique character, because rather than just making us stronger, or faster, or able to communicate over long distances because we figured out what radio waves are, it makes us smarter. That is a really important opportunity, which is what leads me to the relatively positive position I take compared to others in the AI field, who are very fretful and worried and concerned and wanna put on brakes and guard rails and hold back and maybe even stop. The key issue here, where I define AI distinctly from my colleagues, is that they say: well, what we want is AGI, the commonly used acronym for artificial general intelligence. And what they really mean is human intelligence. And that, I think, is a critical mistake, and it's, again, more hubris and a lack of humility.
 
My personal view is: it ain't. Just consider the dramatic demonstrations of the shortcomings of human judgment documented over the years. I mean, go to Wikipedia and look up cognitive biases, and look at the 200 examples of cognitive bias that have so far been collected there. So my notion is that the best role for AI is compensatory intelligence: building a system that draws on our understanding of our own weaknesses.
 
And you can see there's a coherence in each of these arguments: be humble, and acknowledge the fact that we could use a little bit of advice.
 
 Kaihan Krippendorff: Yeah, we could use a little help; we're not the optimal decision makers. I wanna get to this moment you're pointing to that is already underway, but when I opened your book, I immediately started with chapter 5. I wonder if you could walk back and put this into historical context. You talk about language, land, leverage, and literacy as four kind of leaps in human evolution.
 
And they all seem to correlate with communication, or communication speed. Could you just describe those stages?
 
 W. Russell Neuman: Human beings are a recent evolutionary development. Anthropologists and biologists argue a bit about what you would consider the start of modern humans. My rough definition, and this is sort of fun to talk about, is: if you gave a humanoid from 500,000 years ago a shave and a haircut, put a nice suit on them, and had them walk down Walnut Street in Philadelphia, they'd be recognized as just another human being walking down the street; they look close enough to us, and probably act close enough to us. So I figure maybe 400 or 500 thousand years modern humans have existed. And much of what came before that was the shared evolutionary impulses and instincts, sexual and towards violence, that we share with our primate ancestors. So we're there for 400,000 years, and we don't know when language started.
 
We assume that for the first 200,000 years, you and I talking to each other back then would have been mostly gestures, grunts, making sounds like various animals and weather events, and maybe a few words that would be nouns; I'm not sure we, like, comprehended verbs yet. So let's say maybe a hundred thousand years ago we had language, and that gave us the capacity to communicate. And the other element of human adaptability that allowed us to survive is collaborative hunting, cooperation. One of the reasons humans are very sensitive to fashion, and to what other people think about us, is that those things were central to survival. Because if the animals we wanted to kill were bigger than us, stronger than us, faster than us, then unless we could collaborate, and use tools as well, we were likely to be restricted to grabbing a few tubers out of the ground, a few berries, and maybe a few small animals.
 
So if you think about it, it was tough back then. And it's fun to think about: for the great majority of those evolutionary years, we weren't yet dealing with any kind of technology, maybe stones a little bit later. So language comes first. And then, when did we start settling in? I mean, basically, we were hunters and gatherers, and the question is: after 400,000 years, how long ago did we settle into settlements? And the answer is 10,000 years ago. Very recently. We probably chased some animals into a canyon, penned them in, figured out how to keep them there, and figured out we could actually plant a few plants rather than just picking the berries wherever we could find them.
 
We said: hey, this is neat. And it turned out that this same transition happened after the receding of the most recent ice age, when we figured out how to plant and maintain and water things. So we started living together in small communities, not just as small families hunting and gathering. And as we gathered together, we started to organize ourselves, ultimately to build armies. And about this time, maybe 6,000 years later, we started to develop writing.
 
But for the bulk of the following years, only the elites had access to reading and writing; for everyone else it was still a kind of common spoken language. So we had communication, but not yet writing. Writing and communication became increasingly important for our capacity to coordinate and work with each other. And each of these technologies makes a big difference. Then comes the leverage of the industrial revolution.
 
And each one of these transitions was difficult and delicate, and mistakes were made. If you think about the industrial revolution, which gives us the capacity for everything we take for granted today: during its early days, the transition from the sailboat and the windmill, the early cities were ugly, and the working conditions were awful. It took us a while to figure out how to live with these technologies without them kind of overwhelming us. And my view is that now that we've discovered the next major evolutionary transition, not just machines but intelligent machines, we're gonna make a few mistakes as well, and hopefully they will not be existentially challenging.
 
 Kaihan Krippendorf: And then literacy comes after that.
 
 W. Russell Neuman: It's ironic. We learned to write maybe 3,000 years ago; 4,000 years ago is probably when the Sumerians were the first. And the average individual on the globe, the average family member, being able to read and write is probably only about a hundred years old, well into the industrial revolution, when mass education became available.
 
 Kaihan Krippendorff: So when did we realize we were in the industrial revolution?
 
 W. Russell Neuman: I was fascinated. Our colleague Dan Bell at Harvard, a brilliant sociologist, just made this note in passing in a book he was writing a preface for. He said, by the way, we were 100 years into the industrial revolution before anybody actually used the term industrial revolution. So the notion is that we don't fully understand the changes around us; we misperceive them. Well, I know what a railroad is, I know what a steamboat is, but understanding that we're going through this massive revolution where machines would replace human and animal power took a while; it took a while to understand its coherence.
 
 Now, I think that the digital revolution, when we moved from analog communications to digital, was recognized quite quickly. And you and I and our neighbors all share an instant recognition that there's something going on with the artificial intelligence revolution. In this case, because of the drama of what we've discovered, and especially because it seems to be so human-like, we're paying attention to it immediately, and I think that's a very good thing.
 
 Kaihan Krippendorff: So, the reason we call this the Outthinkers podcast, kind of the theory, is that to think out of the box requires new language; if we can't solve a problem, then maybe we lack the word or language for it. And you just described how the industrial revolution was named a hundred years later, and only then could we make sense of it: ah, this is what's going on and how it relates to us. What do you call what we're experiencing now?
 
 W. Russell Neuman: Evolutionary intelligence.
 
 Kaihan Krippendorf: And can you define that for us?
 
 W. Russell Neuman: Well, it's a recognition that what we've invented has such positive potential, if we're not so fearful of it that we destroy it. Think about the famous attribution of brilliance to Gutenberg and the printing press. It turns out that the printing press was invented 300 years before, both in China and in Korea, where the authorities said: I don't want that thing around, that's gonna spread information that I can't control. So they wouldn't permit the printing press. I'm delighted that we are doing the inventing that we're doing, and I'm hopeful that we'll see its positive potential and do the necessary nudging, steering, and correcting to get it to work positively for us.
 
 Kaihan Krippendorff: Yes. That's where we wanna go next: the positive, and the potential negative, the dragons, as you call them, that we want to nudge away from. Sorry, just because you talked about China: you have this great line that I heard you say, that trying to regulate AI is like trying to nail Jell-O to the wall.
 
 W. Russell Neuman: We haven't talked about the technical basis of AI; let me briefly describe it. The concept was invented in the mid-1950s by mathematicians and computer scientists, and for 70 years they struggled to make it work. They had what are now called AI winters, when people wouldn't invest, because it's not gonna work, you're barking up the wrong tree. It turns out three things were missing at the beginning to make these technologies work as intended. The first thing is we didn't have the computer power, which is now called, cutely, compute.
 
I want more compute. I like that one. The second thing was the appropriate mathematics to deal with these large models, with literally billions of parameters. And the third was data, sufficiently diverse data to train these complex models, which is now made available by the Internet. Nobody quite understood the combination of the compute power, the mathematical models, and the data. And when they threw all that data and all that compute together, and it started talking at them in ways that were emergent, not exactly what it was trained on, but seemingly, I don't know, some kind of emergent thinking of its own, it literally scared the inventors.
 
And so the whole notion is that we can't fully understand any model that has a billion or a trillion parameters. But the human brain has somewhere around a hundred billion neurons, and each of those neurons has multiple connections, so there's something like a trillion connections going on in a typical human brain. So we're now talking about technologies, trying to understand the environment around us, that are clearly almost as complex as the human brain.
 
 Kaihan Krippendorff: Fascinating. So then how does that interact with us? You also point out that our cognitive systems have not changed in 400,000 years. I guess there are kind of two questions I wanna ask, and you can decide how you wanna take them. One is to walk us through your argument for what the shortcomings of human cognition are, and how machines can help us address those.
 
And what does it look like? What do you foresee human-and-machine collaborative thinking looking like?
 
 W. Russell Neuman: So the promise, I think, of this concept is evolution where we use intelligent systems to compensate for our misreading of the environment around us. I could be wrong on this one, but I think we understand pretty well what the cognitive biases are. We are hopeful thinkers; we wanna believe what reinforces what we already believe.
 
We're resistant to changing our minds. And, this is what we were talking about a little bit, and it's something that strategists deal with among their colleagues all the time: we're more alert to the downside than the upside. We are genetically risk averse.
 
And if you think about running around the jungle: if you make a mistake recognizing a predator, you die. If you make a mistake and don't see that a berry tree was there that you could have eaten, well, you just missed some calories. So it was in the interest of evolutionary human beings running around the jungles to be especially attuned to risk, especially existential risk.
 
And so now it turns out that, well, you really wanna balance between those two, and there's no reason that computers can't be programmed to come up with an estimate that says: what's the optimal mix of avoiding the downside and exploring the upside?
 
 Kaihan Krippendorff: So just to help us visualize that, walk us through a hypothetical scenario. There's a strategist, and they have to make an important decision, right? They have to decide what approach they're gonna take. How does this AI, which you talk about as being a compensatory partner, come in?
 
Could you explain that dialogue, just to help us illustrate it?
 
 W. Russell Neuman: Okay. So the strategist is trying to sell an idea, and the audience for the strategist is skeptical. And what you wanna do is say: okay, let's look at a dozen equivalent scenarios, and ask under which of those very different scenarios the strategy will be successful. And maybe you can make the case that it's likely to be successful under 10 of the 12 scenarios.
 
What the audience is doing is thinking about those two where it's not gonna work, because they're thinking: I don't know, I see trouble here, and they're attracted to that. And so then it's a reasonable question to ask: okay, what is the probability that each of those scenarios will be the conditions we're gonna be working under in the near-term future?
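For readers who want to play with the exercise Russ describes, it is, at bottom, probability-weighted scoring. Here is a minimal sketch; the scenario names and probabilities are entirely hypothetical, invented for illustration, and it uses six scenarios rather than the dozen mentioned in the conversation.

```python
# Illustrative sketch of scenario-weighted strategy evaluation.
# All scenario names and probabilities below are hypothetical.

def weighted_success(scenarios):
    """Sum the probability mass of the scenarios where the strategy works."""
    total = sum(p for p, _ in scenarios.values())
    assert abs(total - 1.0) < 1e-9, "scenario probabilities should sum to 1"
    return sum(p for p, works in scenarios.values() if works)

# name: (probability the scenario occurs, does the strategy succeed there?)
scenarios = {
    "stable_demand":  (0.30, True),
    "gradual_shift":  (0.25, True),
    "price_war":      (0.15, True),
    "new_entrant":    (0.10, True),
    "supply_shock":   (0.12, False),
    "new_regulation": (0.08, False),
}

print(f"Chance the strategy holds up: {weighted_success(scenarios):.0%}")
```

Russ's point about the skeptical audience maps to the two `False` rows: even though only two of six scenarios fail, they still carry 20% of the probability mass, and that is exactly where the audience's attention goes.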
 
 Kaihan Krippendorff: Got it. Got it. And then, I guess, when we look at the biases in the decision makers: so the strategist is trying to reach his or her confidence in a hypothesis, in what the solution is, and you talk about these multiple biases that they may not be aware of, the commitment bias, the mood-congruent memory bias, the availability heuristic. What would that look like, with this evolutionary intelligence coaching someone through those?
 
 W. Russell Neuman: Okay. The availability bias is the fancy name for the classic joke about the drunk looking under the streetlight for his keys. He knows full well that the keys were lost somewhere else, but the light is better under the streetlight. And we all laugh at how stupid that is; I think we characterize the person looking as a drunk because what sane human being would look in the wrong place just because the light is better?
 
With full knowledge that that's not where the keys are anyway. And we all like that one. And the notion is that the assistant, maybe some version of Siri on your shoulder, says: hey, why don't you look a little farther over there? And take out your damn cell phone and turn on the light. Right.
 
 Right. Right. Right. Love it.
 
 Kaihan Krippendorff: Great. Great. And for each of these biases, it can coach, just like a good coach would. So let's talk a little bit about the potential downsides. You have this chapter, "There Be Dragons." Since we're talking about how we humans gravitate towards protecting against the downside, or the risk, what are some of the scenarios that we should be worried about?
 
 W. Russell Neuman: So somebody is gonna approach you over the next week or two and say: what's your p(doom)? Meaning the probability, p, of doom: the probability that you think AI is going to be an existential risk and kill us all. And when people are asked that, the numbers they come up with are a 0.1 or a 0.05. And there are doomers who are convinced that we've got a problem, the most famous of which is a gentleman named Eliezer Yudkowsky; when asked what's his p(doom), he puts it close to certainty. And there are many very thoughtful and very well-informed computer scientists who believe the probability that evolving AI systems will be existentially risky is significant, and we should be talking with them, paying attention to them, taking their advice.
 
But I think what they're doing is projecting human characteristics. Humans grew up and evolved in the grasslands and the jungles, competing with other humans and other species, and they found that aggression and mistrust and a variety of such behaviors were useful for survival. Computers didn't grow up in the grasslands and the jungles. Computers didn't wake up one morning and say: I wanna eat your lunch. There's nothing in the evolution of computational intelligence that leads to that. So I think that humans are simply, dramatically, projecting human characteristics onto computers that aren't there.
 
Now, the reason I say there be dragons is that, well, there are malicious human beings who will try to design these systems for their own personal and malicious intent. And so, and I say this appropriately apologetically, the one-liner I've come up with is that the best defense against a bad guy with an AI system is a good guy with an AI system. I think we should come to appreciate that when people are scared that those systems are very smart and think very fast, faster than we do, the answer is that we can design protective AI systems.
 
 Those are also very smart, and can serve as instantaneous, flexible, adaptable, protective systems.
 
 Kaihan Krippendorff: Got it. That's brilliant: the idea that they didn't evolve in the jungles, that their kind of innate programming or conditioning is not human. I know we're reaching the top of our time with you, but there are a couple additional questions I'd love to ask if I can steal a couple more minutes.
 
 W. Russell Neuman: Certainly.
 
 Kaihan Krippendorff: We're going a little bit off the track here, but how do you see human-machine communication evolving? You know, there's visual communication, there's auditory communication, there's tactile communication. How do you see that evolving?
 
 W. Russell Neuman: Well, many people have observed this pattern. One of the terms I sometimes use to describe it is convergence. Initially, computers were these gigantic rooms with air conditioning and vacuum tubes. Then they became refrigerator-sized computers, the DEC PDP-11s, and then they became desktops, then laptops, then handhelds, and increasingly they're gonna be wearables: we can imagine smart glasses, maybe ultimately a smart contact lens, so that the capacity to communicate with these systems becomes routine and part of our daily life. We protect ourselves from the environment by the clothes we wear, and I think increasingly that'll be true of using intelligent systems. Think about it: humans evolved with a capacity to see light and to hear sound, but humans have no capacity to perceive the radio waves that are in their environment.
 
We're just missing that whole thing. Now, there are some birds and some fish that have a capacity for magnetic perception, and for the perception of electricity. And now virtually every human being on earth has a sophisticated capacity to interpret the electromagnetic radiation of their environment. So we walk into a store, we electronically communicate our identity and our funds, and we come out with a latte with a flower on the top.
 
 Kaihan Krippendorff: So, you know, what you're pointing to as well is that I find myself interacting more and more often with digital devices. And when we talk about the metaverse, I think people think of, you know, an artificial or augmented reality visor. I don't know, this is not a question that I have really properly formulated.
 
I haven't seen you write about it or talk about it, so I don't know if you have a response to it. But, you know, all of the sound and sight that we do sense gets translated into electrical signals in our brain, and that's what we sense. Right? So are we already digital beings?
 
 W. Russell Neuman: No, we're already analog beings, because the optic connection between your eyes and your brain sends electrical signals, electrochemical signals, that are analog wave forms. And what you hear: your ears basically vibrate and send electrical signals representing the relative volume at different frequencies, and that's sent to the audio section of your brain, which interprets those. So those are very analog signals. Your brain and its synapses reflect a very complicated analog form of electrochemical communication that is not ones and zeros.
 
So there's a very distinctive difference between how the brain works and how computers work. You'll note that I've talked about various ways in which the current sensory systems of the human, eyes and ears and sensitivity to vibration, can interpret the environment through technology. What I haven't talked about is a direct connection to the brain. As you may know, Elon Musk, characteristically, and several others are trying to say: well, what if we implant a chip inside the brain? Ray Kurzweil, among others, has talked about the physical convergence of digital and human analog intelligence, a biomechanical kind of convergence. I'm setting that one aside.
 
I kinda find it icky, so I'll leave that to others to address. But I think there is such a rich capacity for us to use our current sensory capacities, where we use visual and auditory signals to enhance our capacity to intelligently and appropriately respond to our environment. So the next time we say, gee, I think I'm gonna buy a lottery ticket, Siri on our shoulder says: you're more likely to be hit by lightning seven times than to win that lottery.
 
 And I go, well, maybe I could spend that money and invest in the market instead.
 
 Kaihan Krippendorff: Got it. Got it. Yes. Yes. And what I love about your description of this interaction between human and machine is that, in this case, the machine is kind of guiding us and working with us.
 
In your vision, it's not dictating and controlling. Sorry, I'm gonna sneak in one more question, a very, very practical question that a lot of our strategy officers are contemplating: how many foundational models do you think there will ultimately be?
 
Because we take a foundational model, and then we can tune it with our own data, but how many will there be to build on?
 
 W. Russell Neuman: Let's take a very prominent and iconic model, GPT-4, by OpenAI. It took them a hundred million dollars, a hundred days, and 25,000 Nvidia chips, each of them costing somewhere between 5,000 and 15,000 dollars per chip. And what do you do with 25,000 chips? You build a model. People are worried that only the big tech guys can possibly make this work, and it's not clear that OpenAI could have gotten done what it has gotten done without very significant investments, in the billions, from Microsoft. That said, Altman, the CEO of OpenAI, has made it clear that he doesn't think the next step is simply more chips, more data, and longer periods of collecting information. There are limits. It's not like Moore's Law.
 
There are limits, because the costs get exponentially large and complicated as you keep scaling up. So the next stage of making these things better is probably gonna be changing the algorithms that are used, and changing the whole approach to how the algorithms are used, not just throwing more compute and more data at the same algorithms. So my guess, and it's just a guess, is that the answer is probably about a dozen foundational models. And that sounds like a lot of competition; your colleagues may say, well, I'm guessing they're gonna overinvest.
 
And it might not make sense; it might not be sustainable. Given the lemming fashion of Silicon Valley, maybe people end up overinvesting, and there'll be two dozen, and one dozen of them will go down the drain, and the other dozen will have sufficient maintenance and updating. Just like data about human marketing, where two-year-old data about who's buying what isn't valuable at all, a foundational model that hasn't been tuned for two years isn't gonna be worth much.
 
So my prediction is that the tuning and the updating are gonna become really expensive, and we'll end up with something like a dozen foundational models, each of them with a thousand fine-tuned, API-based derivatives: sons and daughters.
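The round numbers Russ quotes earlier (a hundred million dollars, a hundred days, 25,000 Nvidia chips at 5,000 to 15,000 dollars each) make for an easy back-of-envelope check on why only about a dozen players can afford this. This sketch just does that arithmetic; it is not a cost model from the episode or the book.

```python
# Back-of-envelope arithmetic with the round numbers quoted in the episode.
chips = 25_000
chip_cost_low, chip_cost_high = 5_000, 15_000    # dollars per chip
training_cost = 100_000_000                      # dollars, per the episode
training_days = 100

# Capital tied up in the chips alone.
hardware_low = chips * chip_cost_low             # 125 million dollars
hardware_high = chips * chip_cost_high           # 375 million dollars

# Rough burn rate of the training run itself.
cost_per_day = training_cost // training_days    # 1 million dollars a day

print(hardware_low, hardware_high, cost_per_day)
```

Even at the low end, the hardware alone exceeds the quoted training budget, which is the substance of the point that only very well-capitalized firms, or their partners, can build a foundational model from scratch.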
 
 Kaihan Krippendorf: Got it. That makes sense. That makes sense. Alright. I have so many more questions, but we've reached the top of our time with you.
 
I want to know how people can continue learning from you. We're certainly gonna strongly suggest that people buy and read Evolutionary Intelligence: How Technology Will Make Us Smarter. You teach at NYU. You are associated with the MIT Media Lab. You have a number of videos and podcasts that I've listened to.
 
What else can people do to follow you and continue to learn from you?
 
 W. Russell Neuman: Basically, you start with the book. It's a starting point, and I think it's one of those cases where you'll find a couple of ideas that you can pick up and run with. So I think that's an appropriate starting point. And the book is an attempt to make what would otherwise be a series of technical statements easily accessible to everybody.
 
 Kaihan Krippendorff: I found it fascinating to read. It was like an exciting fiction that's actually true, opening with Terminator 2: Judgment Day at the very beginning. Thank you for writing it. Thank you for the work that you do.
 
Thank you for sharing it with us here, Russ.
 
 W. Russell Neuman: It was my pleasure. Thanks.
 
Kaihan Krippendorff: Thank you to our guest. Thank you to our executive producer, Karina Reyes, our editor, Zach Ness, and the rest of the team. If you like what you heard, please follow, download, and subscribe. I'm your host, Kaihan Krippendorff. Thank you for listening, and we'll catch you soon with another episode of Outthinkers.