Outthinkers

#139—Faisal Hoque: AI as a Strategic Partner Alongside Humanity

Outthinker Season 1 Episode 139

Faisal Hoque is recognized as one of the world’s leading management thinkers and technologists. He is an award-winning entrepreneur and innovator, and a #1 Wall Street Journal bestselling author with close to thirty years of cross-industry success.  

Faisal is the founder of SHADOKA, NextChapter, along with a host of other companies, and serves as a transformation and innovation partner for CACI, an $8 billion company focused on U.S. national security.  

Along with his extensive successful career in technology and innovation, and with ten previous bestselling books under his belt, his newest book, Transcend: Unlocking Humanity in the Age of AI, dives into the philosophical and far-reaching implications of the evolution of AI alongside humanity. 

In this episode, we discuss: 

  • The metaphorical evolution of AI from being another technological tool to becoming a strategic partner—working alongside humankind as an extension to it 
  • Faisal’s complementary OPEN and CARE frameworks, which work in tandem to provide clear guidelines for how to both adopt and harness AI 
  • The pitfalls to watch out for, and risks we may encounter as AI becomes even more integrated into society than it is today—as well as how to mitigate these and approach the future with proper guardrails in place 

_______________________________________________________________________________
Episode Timeline:
00:00—Highlight from today's episode
01:21—Introducing Faisal + the topic of today’s episode
04:03—If you really know me, you know that...
05:36—What is your definition of strategy?
06:37—Purpose as an authentic calling creating value   
08:15—Creating vs. discovering purpose 
09:37—The myth of top-down strategy
11:32—On the book 'Transcend'
15:26—Evolving metaphors for AI 
16:50—Humanity's reaction to AI
20:04—The OPEN/CARE framework 
26:08—Creating a human future   
28:35—How can people follow you and continue learning from you?
_______________________________________________________________________________
Additional Resources:
Personal website: faisalhoque.com
LinkedIn: faisalhoque
X: faisal_hoque
Link to book: Transcend: Unlocking Humanity in the Age of AI

Thank you to our guest. Thank you to our executive producer, Karina Reyes, our editor, Zach Ness, and the rest of the team. If you like what you heard, please follow, download, and subscribe. I'm your host, Kaihan Krippendorff. Thank you for listening.

Follow us at outthinkernetworks.com/podcast

Kaihan Krippendorff: Thank you so much for being here. It is great to finally have you on the podcast. I know we've been talking for several months about you coming on, and it's great to finally have you with us. Thank you.
 
 Faisal Hoque: Thanks for having me.
 
Kaihan Krippendorff: I'm excited to dive into Transcend and the evolution of humans and machines, and to draw on your deep expertise and experience in seeing digital transformations, this being another digital transformation, but maybe a much bigger one than any I have seen in my life. But let's start off with the first two questions I ask all my guests. First, just to get to know you a little more personally, could you complete this sentence for me? If you really know me, you know that...
 
 Faisal Hoque: That I'm a technologist, but I deeply care about humanity.
 
Kaihan Krippendorff: Where does that care for humanity stem from, do you think?
 
Faisal Hoque: I think it grew over the years, you know, because as you grow older and you explore, you find out how much commonality you have with each other, how all of our aspirations are the same aspirations, how we all struggle with similar kinds of highs and lows, survive through adversity and change, and try to take care of our families.
 
Kaihan Krippendorff: Yeah. We'll dive into that more. I think what's interesting, at least in my superficial thinking of it, and you've thought about it much more, is that in the past, it's been easier to look at technology as a tool for humans. And AI is maybe the first technology where the question arises: are we for it, or is it for us?
 
Faisal Hoque: Yep. For sure. You know? And I think it exposes humanity in a much bigger way than anything ever has, because it's unlike any other technology, as you know.
 
AI is a very active, if I can use that term, active technology versus a passive technology. Right? So today it's increasingly becoming our partner. But at some point, it could actually supersede us, because it's going to be far more intelligent than anything we have ever seen in our lifetime.
 
Kaihan Krippendorff: Yeah. Which makes it, as you talk about in your HBR piece, so important to start with your strategy and understand what you're after. So what's your definition of strategy?
 
Faisal Hoque: My definition of strategy is, you know, how do you create not just the end goal, but a model that allows you to repeatedly go through a journey? Because, you know, any growth or any transformation is not a destination. Just when you arrive, you've kind of made yourself obsolete. Right? So it is the process and the method you put together that drives you from your purpose, or core ethos, or whatever you're going to call it.
 
To constantly evolve, you know, to be regenerative, right, if I can use that term. So that, to me, is strategy. Strategy is not just, okay, I want to do this, I want to do that. That's almost like a project plan.
 
Alright? I want to do a specific task? A task is not a strategy.
 
Kaihan Krippendorff: So, I mean, there's so much here. You talk a lot, you write a lot, about purpose. Strategy begins with purpose. You talk about the purpose, maybe, of humans, or the characteristics of humans. I have a sense that you've thought a lot about what purpose is and why it's important.
 
 Can you share what your thoughts are?
 
Faisal Hoque: Sure. Look, I think purpose is, in the end, you know, whatever you want it to be. Right? Because it's driven by your authentic calling, whether that be an individual's or an organization's, and how you craft that authentic calling into a set of activities and a set of ethos that generates some sort of value. And, you know, I mean, you know this from your work, and I know this from my work as well.
 
I mean, every organization is vastly different. You can look at Patagonia, you can look at Microsoft, you can look at Apple, and you can look at, you know, the US DOD. Right? These are very different kinds of purposes. And, you know, all of them have this basic fundamental thing: that we serve. Right?
 
But the definition of serving is very different. Right? So purpose is very much, you know, an individualistic, unique journey for each organization and, obviously, for each leader, because at the end of the day, it's the leader and their team that define what that will be. But at the end of the day, I think all of them are trying to create value, whatever that value is.
 
And you could define value in a thousand different ways, as you know.
 
Kaihan Krippendorff: And this is a little abstract question, I guess. But do you think that purpose exists and you discover it, or does it not exist and you create it? For an organization?
 
Faisal Hoque: So I think, you know, it depends on where the organization is in their journey. Right? If you talk about a startup, right, and I know you've worked with a lot of startups, so have I, and we've both built startups, so we know this. Right? Sometimes it doesn't exist and you create it from scratch.
 
Sometimes, you know, you reach a point where you have to reinvent yourself, so you have to redefine your purpose. Right? And sometimes you say, look, times have changed, for whatever reason: geopolitics, the economy, technology.
 
You have to kind of reinvent yourself and, you know, be with the times, so you have to redefine your purpose. But most successful organizations don't necessarily lose their core values and core calling; what they do to stay relevant changes. Right? That's why I said that strategy is a journey, not a destination, because you almost have to develop the core capability: how do you become a strategist?
 
Not just, I have a strategy. Because that strategy, for sure, will be obsolete within a twenty-four to thirty-six month time frame.
 
Kaihan Krippendorff: I want to level this up in a moment to Transcend and kind of look from the organization to humanity, but staying with the organization for a bit: what do you find? Because you've worked with so many strategists in both the public sector and the private sector. What do organizations get wrong when they think about strategy? Or what's the biggest myth about strategy?
 
Faisal Hoque: See, I think, you know, whether it's innovation, strategy, or transformation, people have this notion, especially if you look at large organizations. Right? They have this notion that it belongs to a handful of senior-level people, you know, and many of these folks never cultivate the idea that you have to make it part of your culture. Strategy has got to be part of your culture. I know you write about this a lot and talk about it and advise your clients with the same kind of mindset.
 
So it just cannot be top-down; it also has to be bottom-up. So, you know, in my practice, whether it's for my company or for an organization I'm working with, I try to create a mechanism with a portfolio structure whereby the ideas and strategy come from the bottom up as much as from the top down, and then you can kind of rack and stack to see which of those things actually make sense, rather than just saying, okay, well, we have the chief strategy officer, and the chief strategy officer drives it. To me, the chief strategy officer is almost like a curator and an enabler; the strategy belongs to the company, who's coming up with all the ideas.
 
No one person can possibly come up with all the ideas.
 
Kaihan Krippendorff: I think that, yeah, most of the strategy officers we interact with would agree. You hear them often say, if it becomes your strategy as the strategy officer, then you've lost it, because it needs to be the organization's strategy. So let's level this up, then, to talk about humanity and technology. Maybe we can view humanity as an organization that has a purpose, has certain characteristics, which you lay out in your book. If you don't mind, just kind of lay out your argument for Transcend.
 
Faisal Hoque: Sure. Look. I mean, you know, Transcend is my tenth book, and I've written a lot about, you know, technology and organizations and innovation and the entrepreneurial mindset. That's another thing. So last year, no, not last year.
 
Two years ago, you know, when my last book came out, which was all about reinvention, I said, you know, I wanted to talk so much more about where that reinvention is going, meaning the reinvention of humanity. And I couldn't cover all that in one book. So we took on this book, you know, we took on the journey with Transcend. And we kind of looked at it through four pillars.
 
You know, there's philosophy, there's humanity, there's technology, and there's business, because they're all intertwined. Right? So it comes from that context. So we started by asking, what is humanity?
 
Or which part of humanity are we trying to protect? Because, as you know, humanity has a lot of good things, and humanity has some really ugly stuff. Right? So which part of humanity are we trying to protect? And is all of humanity worthwhile protecting? Right?
 
And where will AI play a role? And every single person, every single organization, every single government entity has a role to play. Because, you know, it comes down to what we do individually. A simple thing is, like, what we post. Right? You can put up a positive post, or you can inflame an entire community with some crazy post.
 
Or what we consume. Right? So all of this has an impact. You know? So we looked at it from that point of view, but we started the very philosophical part of the conversation by asking, okay.
 
How do we define humanity? So we kind of boiled it down, you know, with my research team, which includes a couple of pretty big philosophers along with technology and business people. And, you know, we talked about that. You can probably boil it down to two things. You know?
 
One is freedom, you know, because humanity is all about freedom, freedom of choice. Very philosophical. Right? Because your authentic calling is a freedom. Right?
 
It's your authentic calling, not my authentic calling. It's your authentic calling. So it's freedom, and love in the sense of love for, you know, humanity itself. It could be love for your craft. It could be love for your calling.
 
Whatever the case may be. The combination of freedom and love is what makes true humanity. I mean, you can use that for positive purposes or you can use it for negative purposes. Right?
 
So the question is, when you throw in a technology like AI, which, for sure, is going to be a thousand times smarter than anything we have ever seen. I mean, we're really creating, like, another species, almost. Right? It's a digital species. Which we have never seen, because, you know, we've been kind of, like, top of the food chain; we're the smartest species.
 
You're going to have a species that's a thousand times smarter, because, you know, you and I can talk all day long, but if you fire up one of those large language models, it's got the voices of thousands of people. Right? So already, it's smarter. So when you add that on, what does it do?
 
I mean, how does our humanity get manipulated? Does it, you know, get transcended in a positive way, enabling us to be much better than what we are, or does it take us to a very dark place? Right? Or do we get completely controlled? Right? So it comes from that philosophical point of view.
 
Kaihan Krippendorff: So you talk about there being this underlying metaphor we hold when we look at technologies, especially with AI. Right? And we often aren't conscious of it. You know? So talk to us: what are the metaphors that people hold, and what are the implications of those metaphors?
 
Faisal Hoque: So it's changed. Right? Because, you know, in the past, and that's why I was saying digital transformation, we've been kind of going through this for the last fifty, sixty years, right, from the dawn of the information age, post industrial revolution. But it used to be, like, okay, we input data and we get data back, or we tell it to do something and it does it.
 
Right? So it was very passive. So, metaphorically, you know, you can say, okay, look, that was kind of like a tool.
 
What is happening with AI is that you almost have to look at it like, okay, it's no longer just a tool. It's our partner. But it's moving toward a point where it's almost like an entity that is like a child, and we are the parent, because we're trying to shape how it should think, how it should behave. Pretty soon, what will happen is they will become the parent.
 
And we'll just be, like, living beings completely dependent on this other set of beings.
 
Kaihan Krippendorff: Fascinating. How do you think humans will react to that? Well, you know, because we're outsourcing not to our child; we're outsourcing to a digital being, as you say. And I imagine some parents fight that. They want to still be in control, and some accept it, or some don't even know that they've accepted it. But how do you think humanity will react to that eventuality?
 
Faisal Hoque: You know, this is the billion-dollar question. Right? Because we don't know. It depends on who you talk to. Right?
 
If you listen to, you know, a certain type of philosopher, they will say we've already lost it, and it's going to be total doom and gloom. Right? And if you talk to the people who have completely bought into it, you know, they will say, wow, that's great, because we will have so much time. I think it will depend on what we do in the next three, five, ten years.
 
You know, in terms of how we use it, how we deploy it as a business model, how we regulate it or don't regulate it, you know, how government reacts to it. I mean, you know, the notion that regulation is a bad thing? It's not true. You know?
 
Regulation doesn't necessarily have to stop innovation. I'm all for innovation, as you know. Right? So it's not about stopping innovation; it's about being responsible. Right?
 
So if you compare AI with other technologies that have both positive and negative impacts, you can look at nuclear energy as an example, or you can look at pharmaceutical technology, or biomedicine, as a technology used for positive or negative ends. I mean, they're regulated for a reason. Right? I mean, everybody should not have access to nuclear energy, because it could actually be used to create a nuclear arsenal. Right?
 
Same with pharmaceuticals. I mean, we have so many problems with, you know, illegal drug addiction and whatnot. We can see the parallel with AI. I mean, it can be very addictive, you know, and I kind of catch myself from time to time, because as I do research or as I try to craft something, you know, I'm increasingly using different kinds of AI tools, and also on the back end, you know, on the enterprise side. Right?
 
I mean, most of the conversation thus far has been on the consumer side, but it's increasingly moving to the enterprise side. Where do you stop? You know, how much is too much? Right? And where does the ethical boundary come in, and where does the law come in?
 
You know, I spent some time with some of our legislators, because of my government work, talking about legislation, and, you know, there's bipartisan agreement that there has to be a much better kind of model than what exists now. There are organizations, like NIST, the National Institute of Standards and Technology, that have these kinds of ethical frameworks, technological frameworks, but it needs to go much beyond that. Right? Because otherwise, we lose it.
 
Right? So, the people who are sending out the alarm out there, that's where they come from. And I think there is a middle ground that we kind of have to follow.
 
Kaihan Krippendorff: Yeah. What I like about, at least in my understanding of your approach, and I'm going to get into the OPEN and CARE frameworks that you introduced, is that it's really kind of a balance. Right? You know, in Hinduism, you've got creation and destruction; you have the yin and the yang. And what I understand is, we're asking, how can we harness AI for its good, but then how do we also, in parallel, manage and predict its risks? So could you talk a little bit about that?
 
Because I thought that your OPEN and CARE frameworks were very tangible.
 
Faisal Hoque: Thank you. Thank you. Yeah. So, you know, this comes from doing the work and being a practitioner. Right?
 
Because if you look at a large organization, or any organization, I mean, you have to constantly innovate. You have to be radically optimistic. How do you go and innovate? How do you look at the possibilities and create different things? And by the way, I'm not a prescriber of dropping everything you have adopted or learned, whether that's Porter's five forces or whatever it is. It doesn't matter; they're all useful tools.
 
You kind of have to construct your thinking in the sense that, you know, you have to be radically open to innovate, but you also have to care enough to protect humanity. That's what OPEN and CARE stand for. With OPEN, you're open to opportunities: you assess where your opportunities are, find the right partners, create a set of portfolios, etcetera, etcetera. Right? CARE actually comes from risk management. Right?
 
So the first letter of CARE is about thinking through the catastrophic scenarios you could create with a particular initiative you are taking.
 
Kaihan Krippendorff: You even verbized it, I think: catastrophize. I love that.
 
Faisal Hoque: Catastrophize. Exactly. And, you know, the idea is that, look, say you're building an HR system, which, by the way, you know, Amazon did in 2018, whereby they were using an AI tool to look at different candidates as potential hires. And the software, the algorithm, was just pushing out people from particular schools, particular locations, particular ethnic backgrounds, etcetera, etcetera. And it didn't work, because it was highly biased; the people who developed the model came at it from their own angle.
 
Right? And that's actually a minor, you know, minuscule example of a catastrophe. But now think about it in healthcare. Right?
 
Imagine, you know, and I often talk about this because of my personal situation: I have a son who's a cancer survivor, and that's why, actually, all the proceeds of the book go to cancer research. He is now 22. He was diagnosed at 18. And, you know, he suffers from a particular cancer called multiple myeloma, which is a blood cancer. And usually, people get that kind of cancer when they are in their forties or fifties or sixties.
 
Actually, mostly fifties and sixties. Cases in his age group are very rare. So, being a researcher and a technologist, I was really trying to find a dataset for his age group, and I couldn't find any. Right?
 
Colleagues said the same thing. Like, look, one of the difficulties is that we have all these datasets, and we now have a bunch of AI tools that do predictive modeling in terms of what the patient care model should be and what diagnostics we should be using to pinpoint it. Because cancer, as you may or may not know, is very individualistic. It's very specific to, you know, DNA type or age, that kind of disease. So you have to really customize, you know, the patient care, and we couldn't come up with anything, because the dataset doesn't exist.
 
So imagine if we had actually used the datasets that were available, which apply to people in their fifties and sixties. It's not going to work.
 
Kaihan Krippendorff: Yeah. It could recommend some, yeah, some treatments with dire consequences. Yeah.
 
Faisal Hoque: Exactly. So that's where the catastrophizing model comes in, because you have to think out of the box in terms of risk. It's not just being radically optimistic; you also have to be radically pessimistic, and then balance it. You know, we're talking about the balance, then looking at the middle ground. So it's not all in or all out. It's finding the best of both worlds.
 
Kaihan Krippendorff: Yeah. Because I feel like every conversation I have about the future of technology becomes a self-reinforcing conversation, where you think the future is either going to be Star Wars or the Jetsons. You know, the dark or the light future, and then it just feeds back into your assumptions and then your choices. And so here, I think you've got both the dark and the light working in parallel. Yeah.
 
Faisal Hoque: And the reality is, you know, Kai, that whether it's, you know, web technology, whether it's ERP, or whether it's, you know, CRM, implementations have been failing repeatedly, because most of them come from the mindset of, well, here's a great technology, right, we've got to use it. Or, we have to wholesale change everything. And in the process of doing that, you lose your core processes.
 
With AI, it's actually even more of a risk, because of the fact that it can actively become your partner. Right? In the sense that, you know, the future of the organization is going to be such that you're not just going to allocate work to human beings; you're going to allocate work to these agents that are part of your organization. Right? They will be your strategy partner.
 
They will be your research partner. They, meaning these agents, right, these AI agents, will be your, you know, warehouse worker bees, which already exist, you know, putting out your boxes.
 
Kaihan Krippendorff: Yes. Love it. So many more questions, but we're reaching the top of our time. Let me read the last sentence of your book Transcend, which kind of points to where we might be going, where we could be going, and just ask you to unpack it for us. I think you already started doing that just now.
 
"If we approach AI with flexibility and utility, if we approach it in the spirit of service, if we detach from what harms and devote ourselves to what heals, well, to quote the English writer Rudyard Kipling, yours is the Earth and everything that's in it." So where could we be going? And what do you advise someone keep in mind to create this human future?
 
Faisal Hoque: You know, you mentioned the quote, and we talked about the devotion and, you know, the detachment, which comes very much from Buddhist philosophy, this middle-of-the-road mindset. You know, we have to be very devoted to transcending human potential. Right? And we have to detach ourselves from what could cause harm and take away the best of humanity. Right?
 
I mean, humanity gave us lots of great things, from poets like Rudyard Kipling, to Miles Davis, to, you know, Steve Jobs. Right? And humanity also gave us lots of very dark characters. So staying devoted to enabling more people like that, many more of, you know, the Steve Jobses and Miles Davises of the world, is where we should focus. So whether you're an individual who is influencing and coming up with things, or whether you're an organization serving a set of customers, or whether you're a government trying to increase the quality of life of your citizens with infrastructure, that should be our devotion, and we should be detached from all the things we do on a daily basis that cause nothing but havoc.
 
Kaihan Krippendorff: And what I also got from your work is: don't fool yourself into thinking that this is only the choice and influence of presidents and philosophers. If you are a real estate agent and you are adopting these things, you know, coming up with your plan, the choices you're making are influencing our collective future. Amazing. So I know we've only scratched the surface, but how can people continue to learn from you? Certainly, they should get one of your 10 books.
 
Transcend is the one we've mostly been talking about. You've got a great HBR article. You do great work. How can people continue to connect with you and learn from you?
 
Faisal Hoque: Look. I mean, as I mentioned, you know, all the books' proceeds go to cancer research. So please buy any one of them. You'll be helping some poor kid. You know?
 
So that's that. And, also, you can learn. You can buy the books in audio form and, you know, Kindle or eBook or hard copy, on Amazon and elsewhere, etcetera. My website, faisalhoque.com, has a plethora of information.
 
I constantly keep pushing myself to write. As you know, you and I both write for Fast Company, and we've been there for many years. You know, simultaneously, you had your column, and I have my column. So there's stuff like that. And I post stuff, you know, on LinkedIn on an almost daily basis, some nugget of microlearning. So you don't need to buy anything or subscribe to anything to get those.
 
Kaihan Krippendorff: Well, thank you. Thank you for doing all of that, for thinking and working out there, and for capturing the insights for us in such tangible ways, Faisal. And thank you for taking the time to sit with us and unpack it a little here.
 
 Faisal Hoque: Thank you for having me.
 
 Kaihan Krippendorff: Thank you.
 
 
 

