
Outthinkers
The Outthinkers podcast is a growth strategy podcast hosted by Kaihan Krippendorff. Each week, Kaihan talks with forward-looking strategists and innovators who are challenging the status quo, leading the future of business, and shaping our world.
Chief strategy officers and executives can learn more and join the Outthinker community at https://outthinkernetwork.com/.
#134—Sandra Matz: The Intersection of Human Behavior and AI's Psychological Targeting
Sandra Matz is the David W. Zalaznick Associate Professor of Business at Columbia Business School, in New York, where she also serves as the Director of the Center for Advanced Technology and Human Performance.
As a computational social scientist with a background in psychology and computer science, Sandra studies human behavior and preferences using a combination of big data analytics and traditional experimental methods. Her research, and the topic of this podcast, uncovers the hidden relationships between our digital lives and our psychology with the goal of helping businesses and individuals make better and more ethical decisions.
We dive into some fascinating insights from her January 2025 book, MINDMASTERS: The Data-Driven Science of Predicting and Changing Human Behavior, exploring the concept she coined of “psychological targeting,” a discipline that reveals how our digital footprints expose intimate aspects of our psychology and can be used to shape decisions—from what we buy to how we vote.
In this podcast, we delve into this topic, discussing:
- How digital technologies and AI are giving us unprecedented abilities to understand and target specific people in specific states
- The profound implications, for better or worse, of this capability
- Some exciting—and concerning—examples of what this might look like, from diagnosing mental health conditions to crafting highly personalized automated marketing campaigns
- How this hyper-personalization and new AI has the potential to influence, change, and even control our behaviors
- What companies can learn from Apple's "Evil Steve" test when designing products and experiences to safeguard against future misuse of data
- A glimpse into what could be a solution to the ethical dilemma of the capabilities Sandra studies: federated learning—a form of machine learning that keeps individuals' data on their own devices while still delivering high-quality insights
_______________________________________________________________________________
Episode Timeline:
00:00—Highlight from today's episode
01:26—Introducing Sandra + the topic of today's episode
03:48—If you really know me, you know that...
04:26—What is your definition of strategy?
05:10—The two steps of psychological targeting
06:05—What has changed in how psychological targeting is implemented?
08:41—How can we differentiate extroverts from introverts?
10:09—The manners in which psychological targeting can be intrusive
10:55—Replicating old-school communication online
12:30—Dynamic personality states
15:52—What are 'good' applications of psychological targeting?
18:19—How organizations can ensure they use psychological targeting ethically
20:40—Safe ways to collect data without compromising individual privacy
23:05—Can psychological targeting influence internal company behavior?
25:42—How companies can align themselves with diverse individual identities
26:48—What skills and capabilities should organizations develop to adapt to AI-driven personalization?
26:26—How can people follow you and continue learning from you?
_______________________________________________________________________________________
Additional Resources:
Personal website: sandramatz.com
Link to book:
Thank you to our guest. Thank you to our executive producer, Karina Reyes, our editor, Zach Ness, and the rest of the team. If you like what you heard, please follow, download, and subscribe. I'm your host, Kaihan Krippendorff. Thank you for listening.
Follow us at outthinkernetworks.com/podcast
Kaihan Krippendorff: Alright. Sandra, thank you so much for being here. It is a thrill. I got to see a TED Talk that you gave a few years ago, and it's nice to see it all come together in this book, Mindmasters. Thanks for being with us.
Sandra Matz: Well, thanks for having me
Kaihan Krippendorff: I wanna start with the same questions I ask all of our guests. The first is just to get to know you a little bit personally; it could have nothing to do with your work or intellectual interests. Could you complete this sentence for me? If you really know me, you know that...
Sandra Matz: You should not approach me before I've had my first cup of coffee. I think you find that out really quickly.
Kaihan Krippendorff: Who's the Sandra that we meet before your coffee?
Sandra Matz: Without it, I'm just grumpy and not interested in anything. When you typically talk to me, I would say I'm curious, I'm excited, but without coffee, there is nothing.
Kaihan Krippendorff: Gotcha. Great. And this is a podcast on strategy, so my question's gonna be, and I've asked this question a hundred and forty times to a hundred and forty different experts, and we always get a different answer: what's your definition of strategy?
Sandra Matz: It's funny, because I asked the same question to one of my colleagues once. I'm in the management group at Columbia Business School, and I was put on the strategy search committee. And I had no idea. So I asked him, like, what is strategy? And he told me it's all about trade-offs. I still didn't know.
So the way that I think about strategy is, essentially, you need to figure out what you want and what you don't want, and then figure out the steps of how to get there given the constraints. So that's my simple version of strategy.
Kaihan Krippendorff: Knowing what you want and don't want. Love it. Thank you. So there's so much I wanna dig into here, and we won't be able to cover all of it. But I think let's start with the definition. Psychological targeting is a term that you coined, and it makes so much sense now that I've thought about it.
So tell us, what is the definition?
Sandra Matz: Yeah. So the way that I think of psychological targeting is essentially two steps. One is: how do we take the data that we generate every day, everything from social media, to credit card spending, to the data that gets captured by a smartphone, and turn that into psychologically meaningful constructs? That could be anything from personality, moral values, leadership qualities, to mental health scores. And then as a second step: how do we tap into some of these insights, into these psychological motivations, needs, preferences, to potentially shift the way that you think, feel, and behave?
Kaihan Krippendorff: Gotcha. But it's becoming newly possible to do this differently than before. Right? That's a big part of what we're gonna explore. Just maybe at a high level, what has changed?
Sandra Matz: It's a great question, because I think there are almost two waves in which this has changed. The first one is just the fact that we have all of this data, and that we then also had machine learning models that can take the data and translate it into something like a psychological construct, and also distribution. Right? So the fact that we can target people on Facebook, that we can send out content that is hyper-personalized, that was the first wave, I would say. The second wave, and we might get to this more later, is that AI has democratized this.
I don't need a proprietary model anymore that translates something very specific into something else that's very specific. I can just pop something into ChatGPT and say, hey, here's the social media post of person X. What is their Big Five personality profile? What are their moral foundations?
And then, obviously, AI is also incredible at taking some of these dimensions and creating content that speaks to someone who is extroverted, someone who is more impulsive, someone who is neurotic. So that's the second revolution, I would say.
Kaihan Krippendorff: Yeah. I understand. I hadn't thought of that. I mean, you have the feedback loop, more information coming in, and then we have the customization loop, with AI actually playing a role on both sides. Tell us a little bit about extrovert versus introvert.
How do we know whether someone is one or the other?
Sandra Matz: It's a great question, and I'm just using them very liberally, because, technically, extraversion is part of the Big Five personality framework, which has these five traits: openness, conscientiousness, extraversion, agreeableness, and neuroticism. And the way they were developed was to try to capture differences in the way that people think, feel, and behave. And they're relatively pragmatic approximations. Right?
It kind of collapses the complexity of who you are into: you're more extroverted, more open-minded, and so on. And they're usually not thought of as a dichotomy, the way I said extroverted, introverted. It's usually a continuum. You can be anywhere from extremely extroverted to extremely introverted, and extraversion is one that we use quite a lot, because it distinguishes the way that we interact with our social environment. So it's relevant for quite a lot of behaviors.
Like, which jobs do you like? Which music do you like? Mental health. It's just one of these dimensions that's driving a lot of behavior.
Kaihan Krippendorff: Yes. Right. So it's just one example. But, you know, I have done my MBTI test, and I know my type; I took a test, and I thought about it and observed. But now the way that someone would know that I'm an introvert is very different.
Right? So how can that be done?
Sandra Matz: Yeah. Mhmm. So it used to be, as you said, that you take a test, and the MBTI is actually a bit challenging because, as you said, it just dichotomizes. Right? It puts you into one of these buckets.
But essentially, instead of asking you questions, and extraversion questions would be: I talk to a lot of different people at parties. I'm the life of the party. I like socializing. I get energy from social interactions.
But instead of asking you these questions, we can essentially just observe all of the traces that you leave as you interact with technology. So instead of asking you whether you socialize with a lot of people, I can just see how often you interact with other people by tapping into your phone's sensors, for example. I might just take your GPS location, and I know exactly where you are, so I might know that you are spending the night in a busy pub where there's usually a lot of activity happening. Or I can tap into your Bluetooth and see that you're surrounded by a lot of other people. I can check your credit card and, again, see that you're hanging out at concerts and bars. Or I can look at your social media, where you talk about the fun weekend that you've had.
So instead of asking you all these questions, I can just literally follow your footsteps and see what you've been up to.
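As a rough illustration of the inference step Sandra describes, here is a toy scorer that turns a few digital-footprint signals into an extraversion estimate. Everything here, the feature names, the weights, and the example people, is invented for illustration; real systems learn these weights from large labeled datasets rather than hand-picking them.

```python
# Toy illustration of turning (invented) digital-footprint signals into a
# rough 0-1 extraversion score. Weights and thresholds are made up.

def extraversion_score(footprint):
    """Combine a handful of behavioral signals with hypothetical weights."""
    score = 0.0
    score += 0.3 * min(footprint["nights_out_per_week"] / 4, 1.0)
    score += 0.3 * min(footprint["avg_nearby_bluetooth_devices"] / 20, 1.0)
    score += 0.2 * min(footprint["bar_and_concert_purchases_per_month"] / 6, 1.0)
    score += 0.2 * min(footprint["social_posts_per_week"] / 10, 1.0)
    return round(score, 2)

# Two invented people: one socially very active, one homebody.
alice = {
    "nights_out_per_week": 3,
    "avg_nearby_bluetooth_devices": 18,
    "bar_and_concert_purchases_per_month": 5,
    "social_posts_per_week": 8,
}
bob = {
    "nights_out_per_week": 0,
    "avg_nearby_bluetooth_devices": 2,
    "bar_and_concert_purchases_per_month": 0,
    "social_posts_per_week": 1,
}

print(extraversion_score(alice))  # 0.82 -> reads as fairly extroverted
print(extraversion_score(bob))    # 0.05 -> reads as fairly introverted
```

The point is not the particular formula but that none of these inputs require asking the person anything: they are all passively observable traces, which is exactly what makes the approach both powerful and, as discussed next, creepy.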
Kaihan Krippendorff: What do you say to people who, I'm sure, often hear you describe that and say, that feels really creepy?
Sandra Matz: I would say that's absolutely correct. It feels incredibly intrusive. Right? The fact that someone, based purely on the digital footprints that we leave, can make inferences about whether you're extroverted or introverted. And not just that.
We can learn about your mental health, your sexual orientation, whether you're neurotic or not, whether you're impulsive or not. There are so many things that we can learn that are incredibly intimate. And I think it's absolutely right to be worried that this is not only possible, but that it's also used by companies to not just peek into your psychology, but potentially also to change the choices that you make.
Kaihan Krippendorff: Mhmm. Right. Yeah. That's what we're gonna get to. Because up until now, it's been: identify a kind of generic profile of someone, a customer or a voter, and then send them one from a selection of messages.
Now I think we're talking about something much more individualized. And because they're interacting with us, not just taking information in, there is now this feedback loop where we're actually able to influence behavior. Is that it? Could you tell us?
Sandra Matz: Very much. And for me, the interesting part is that, in a way, that's always been the case. If you think about how old-school communication worked, the salesperson was trying to figure out who's on the other side, make inferences about their preferences and psychology, and then adjust their communication to that. Now, fast forward, we're trying to replicate the same thing online, in a way. We're taking your data. Like, those are the observations of the salesperson.
We're turning it into insights and intelligence, and then we're using that intelligence to customize the pitch that we make, the content that we show, and so on. And, again, AI has just put this on steroids, because you can automate the entire process, all the way from: here are the interactions that I see with the customer, to: okay, now that I know who person X is, let me come up with the perfect messaging, with the perfect image to go with it. And then, based on my interactions with this content, I can optimize even further. Right?
So I don't necessarily rely on this initial, relatively broad category of extroverted or introverted, but I can fine-tune and zoom in even more.
Kaihan Krippendorff: Yeah. And you talk about how, even at the individual level, we can now get more granular, because before your coffee, you were a different person than after the coffee. So tell us a bit about that, because I've always been intrigued by it. My father is a communications professor, and I remember one thing that he wrote about was that we have these different identities: in the morning, I'm a commuter; in the afternoon, I'm a coworker; in the evening, I'm a husband. So talk about this a little bit, because I have not seen people really dig into the fact that we have multiple selves and multiple identities.
Sandra Matz: Mhmm. And for me, that's one of the interesting new developments in personality psychology. It used to be the case that we thought of personality as something that is relatively static. Right? So even if you had these multiple identities, usually it was still stable: okay, every morning you have this identity. It was rare that those identities clashed. Right? And oftentimes it's true.
We can act out of character. So if you're generally really introverted and your job requires you to be extroverted, you can totally do that, but it typically takes a toll. And what we now know is that people are much more dynamic based on the context. Right? Even if you're generally very introverted, if you're surrounded by a lot of other people, or you engage in an activity that's social and exciting, most of us feel more extroverted in that moment than we typically do.
The way that personality psychologists think of personality right now is as a distribution of states. So instead of having a one-time measure that says you're in the eightieth percentile on extraversion, we can sample. Let's say we ask you a hundred times over the course of four weeks: how extroverted do you feel right now? Like, how social do you feel right now? And we will get this distribution that still tells us something about your general tendency. Right? Because the mean of that distribution essentially says: on average, if I don't know anything about the context that you're in, you're somewhere around the eightieth percentile.
But then I also know that, well, maybe currently he's surrounded by friends, he's at this concert, he's having a really good time. So maybe I'm gonna adjust my estimate and say, right now it's the eighty-fifth percentile. The chance that you're ever gonna be in the tenth percentile is still very slim. Right?
You'd pretty much have to be confined to solitude, not seeing anyone for multiple days, to maybe feel that way. So we get a sense that says: okay, here's how you generally behave, while also allowing for this dynamic adjustment based on context and situation.
Kaihan Krippendorff: Fascinating. And I wonder, I don't know if I read this in your book or not, but it's just a thought: as more of our interactions are not physically with other people, but digitally with other people, or digitally by ourselves, I would think, at least for myself, that my variance is wider when I am not with other people. Being around other people, I sort of feel like they expect me to be a certain way, and so I'm more likely to be that way.
Sandra Matz: Could be. You're conforming.
Kaihan Krippendorff: Is this what you call psychological situations? Are we talking about the same thing?
Sandra Matz: Yeah. Mhmm. Essentially, the more you know about the situation, right? The more you know that the situation encourages people to be social, or to be scared, or to be nervous, the more you can also make adjustments to your estimate of who the person is.
Kaihan Krippendorff: Mhmm. Gotcha. Okay. So I wanna get to, you know, the potential downsides and risks and how companies can think about managing against those, but let's start with the good news. What are some of the good applications of psychological targeting?
Sandra Matz: Yeah. So for me, for pretty much every use case that you could think about in terms of nefarious purposes, there's a good use case. Right? So think about selling people stuff that they don't need.
You can instead use it to help people save more. If we can use it to keep people in their echo chambers and just cater to their preferences, you could technically also use it to say: hey, here's stuff that they don't know, but they should know, so expanding their view of the world. My personal favorite is probably the mental health space, just because there's so much opportunity.
And I think the nefarious use case is obvious. Back in 2015, there was this news story that broke of Facebook predicting which teenagers were suffering from anxiety and depression, and then selling them out to advertisers. That's obviously not exactly what we wanna see. But you could imagine: if there's an ability to predict whether someone might be suffering from depression, not when they're fully in the depression, but early on, because maybe their phone tells you that they're not leaving the house as much anymore, there's much less physical activity, they're not taking as many calls, you could actually send this early warning signal and say: hey, maybe it's nothing. Right? Maybe you're just on vacation, but there seems to be something off; you're deviating from your typical behavior. Why don't you try and get help? And then, of course, with all these conversational agents that we have right now, there's also a lot of opportunity for treatment, because there's a huge gap.
For every, I think, hundred thousand people looking for a therapist, there are thirteen trained professionals. So there's a huge gap. And you can imagine that this is an average, so the gap gets even wider in certain parts of the world. Again, in an ideal world, if you have access to a great therapist, you probably wanna keep going. But if you don't have access, or if you would just complement your existing therapy, there's so much potential in talking to these agents, which can both understand what you might be going through and also digest everything that we know about psychology and treatment. Right? They're incredibly good at reading scientific papers and then having conversations that follow them, like: here's how to treat PTSD.
Kaihan Krippendorff: So it seems to me like that has a lot to do with the intentions of the organization. Some organizations, like Apple, for example, in my experience as a consumer, have done a good job of positioning themselves as a brand that you can trust, that has the right intentions. Meta, not so much. So what advice do you have for an organization that wants to make sure that they continue doing the right thing?
Sandra Matz: Yeah, it's a great question. So, to be clear, these companies have different incentives. Right? Apple is a B2C company, so their end customer is the user.
So I think they have an easier time advocating privacy than when you're selling data. But the one thing that I've seen companies like Apple do, which I think is a fascinating thought experiment, is what they call the Evil Steve test. Now it would be different, but this was essentially back in the day when Steve Jobs was the CEO. The idea was: let's imagine we are building this product right now, and we're collecting data for a very specific purpose.
Right? All our intentions are great. We wanna give consumers a better experience, a better product. But let's imagine that tomorrow we get a different CEO with fundamentally different values. Would we still feel comfortable collecting the data and setting up the product the way that we're doing right now? And if the answer is no, if the answer is that we're worried about collecting all of this user data because, even if we don't abuse it right now, it could be abused in the future, then maybe we just have to go back to the drawing board and do a better job.
And the reason why I love this thought experiment so much is that, typically, in the moment, everybody's excited about the product. The engineers have been working on the code, and the business people see all of the opportunities, so it's very easy to get carried away with the momentum of: no, we're all aligned in our values, and we all want the best for the consumer, without necessarily realizing that the data we collect is oftentimes permanent, but the leadership of companies or governments is not necessarily. So I think it's a nice way of thinking through these potential dark futures in the here and now, and baking it into the process so that it's just a check you go through before the product is launched.
Kaihan Krippendorff: And what I found eye-opening in your book, and it's probably something that other people know, but for me it was a new idea, is that there are safe ways to collect the data so we don't have individual data, but we can still have the benefits of it. Can you talk to that a little bit?
Sandra Matz: Yeah. Mhmm. So some of it means actually not collecting the data. I think of it as: less is actually more, in many cases. There are technologies now that allow you to essentially extract intelligence without collecting the data.
Let me give you an example. The medical space is one that I personally care a lot about, because medical data, if you think about histories and lifestyle choices, is very sensitive data. But also, if you were able to pool that data, it could be incredibly helpful. Right? We could figure out how disease works.
We could figure out which treatments are effective for which groups of people, and so on. But oftentimes, we don't want this data to live in one central place. Right? I don't want to send all of my data to a pharma company and then have to trust that there's not gonna be a data breach, or that they're not using it for something that I don't want them to use it for.
So there is actually this really interesting technology that not that many people are talking about, but it allows you to still offer superior personalized products and services without the need to collect the actual user data. And Apple is one of the companies using it. Take your iPhone, for example, and the way that Apple trains Siri, the voice and speech recognition software. Right? The way that they do it is, instead of you sending all of your voice data to a central server at Apple, where they process it, learn how to recognize your voice, and learn how to train their models, what they do is they send the model to your phone. Your speech data never leaves its safe harbor. It always stays on your phone, and the model just gets locally trained, because you have a supercomputer in your hands.
Right? The phone that you have is many times more powerful than the computers that we used to send people to the moon. So we can just locally process the data, and we can learn how you talk, and then what you do is send the intelligence back. So instead of sending your data to Apple, you send the intelligence. They still learn from everybody talking.
Right? Their model gets improved collectively, but you retain your data and you retain your privacy. So that's a way in which we can build these better products while still safeguarding our data in our own hands.
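The "send the intelligence, not the data" pattern Sandra describes is federated learning, and its core loop can be sketched very simply. In this toy version, each client fits a tiny line y = w*x + b to its own private data and shares only the fitted parameters, which the server averages. The clients, their data, and the model are all invented for illustration.

```python
# Minimal sketch of federated learning: parameters travel, raw data stays put.

def local_fit(data):
    """One client: least-squares line fit on data it never sends anywhere."""
    n = len(data)
    mean_x = sum(x for x, _ in data) / n
    mean_y = sum(y for _, y in data) / n
    w = (sum((x - mean_x) * (y - mean_y) for x, y in data)
         / sum((x - mean_x) ** 2 for x, _ in data))
    b = mean_y - w * mean_x
    return (w, b)

def federated_average(models):
    """Server: combine the shared parameters; it never sees the data."""
    n = len(models)
    return (sum(w for w, _ in models) / n, sum(b for _, b in models) / n)

# Three clients whose private data all happen to follow y = 2x + 1.
clients = [
    [(0.0, 1.0), (1.0, 3.0)],
    [(2.0, 5.0), (4.0, 9.0)],
    [(5.0, 11.0), (8.0, 17.0)],
]

local_models = [local_fit(data) for data in clients]  # trained on-device
w, b = federated_average(local_models)                # only parameters travel
print(w, b)  # 2.0 1.0
```

In production systems the shared "intelligence" is typically a neural-network update aggregated over many rounds (federated averaging), often with noise added for differential privacy, but the principle is the same one Sandra describes: the model improves collectively while each person's data stays on their own device.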
Kaihan Krippendorff: Got it. Excellent. We could double-click on that and unpack it further, but I know that we're reaching toward the top of our time with you, and there are a couple things that I wanna make sure we also cover, against so many questions to ask. I'm really interested in how, and if, this can be applied internally to behavior change.
Like, there's one piece of one book I'm writing about companies that adopt customer lifetime value as the primary metric by which they run themselves. You can get this data to better distinguish the high-value and lower-value customers. But then, when they want to adopt it, it's hard for them to get behaviors internally to shift, for a number of reasons, but one is: we wanna treat all customers well, and CLV says we should treat them differently. Can we use this to shape internal behavior?
And what does that say about the future of culture, management, or change, or whatever you call it?
Sandra Matz: I think it's such a great question. I've been preaching this for a long time. Right? Because companies spend so much time and effort trying to understand their customers. What is it that they want?
How do we best serve them? And yet they rarely do this internally. I should say that I do think there is a slight difference, because employees are very attuned to this idea of a company trying to use the data that they generate to make predictions. So internally, and it's also true externally, it's super important to take employees along the way and involve them.
Right? So instead of just making passive predictions, I would always include them in the process. But we've done some studies where essentially the same principles apply. Right? Once I can tap into your psychology and understand your motivations, I can also serve you better.
In our case, we actually worked with car salespeople and the rewards and recognition platform that they had. Typically, the more cars you sell, the more points you get, and you can put them towards the rewards that are on the platform. Now, there are so many products and services there that you can spend your points on, it's impossible to find something.
Right? It's the same as Amazon. It's just overwhelming. You spend way too much time, and then you don't find exactly what you need. So what we tried to do is measure their personality and then sort the products by what we thought was gonna be the most interesting and relevant for them.
And what we saw is that doing that, for the people who were intrinsically motivated, actually boosted the number of cars they sold. So it was a way of saying: can we translate what we do with customers to the inside of companies? Again, not all employees are gonna be motivated by the same rewards, by the same policies. How do we make it such that we really speak to each and every individual?
Kaihan Krippendorff: Yeah. And I imagine the relationship that a customer has with a brand, and that an employee has with an employer, therefore becomes very different. Because whether it's the employer brand of Apple, let's say, or the customer brand, I think of Apple as a persona, as an identity. But now we could have such different relationships. Going back to the very beginning, we're different people, and that brand could be different people to different people in different situations.
Sandra Matz: And I think if that is done in a very transparent way, right, it's the company saying: look, I wanna try and make your experience as good as I can. Here's the stuff that I would like to know about. You tell me what you wanna tell me, but here's what I can do with it. And I think that becomes a totally different story than just the passive targeting that we see in the consumer space.
Kaihan Krippendorff: We had Stephan Meier on, and I loved his concept that the employee is the new customer. And I see completely the application of all your work there, internally, and the way it can shape how organizations organize. Okay.
We're reaching the top of our time with you, and I have, like, two last questions. You know, a lot of your work has come up in the work that I got to do with our friend Rob Wolcott on proximity, and I absolutely see this being an example of proximity.
Sandra Matz: It's just getting closer and closer. Right? You can essentially create all of these experiences, personalized in real time, in a way that was traditionally limited to the face-to-face context, and now we can do it scalably for millions of people at the same time.
Kaihan Krippendorff: What are the capabilities that organizations need to start thinking about establishing, or habits, or principles? What needs to start changing internally for them to shift toward where things look like they're going?
Sandra Matz: Essentially, I think most companies have figured out that AI is gonna be the future in some way or another. I don't think it can be a siloed function. Right? The same way that we have people thinking about sustainability across the entire company, I think we need to do the same thing for AI. Someone in HR needs to understand how AI works, someone in marketing needs to understand it, someone in finance.
And that's how you embed it, first of all, both internally into the culture, but also into the product. And you also have different people thinking about: well, is this still in line with the values that we have? So it's not just a siloed function that gets called on to make the tech work, but something that really gets embedded in the culture and in the thinking of every person.
Kaihan Krippendorff: Yeah, that makes a lot of sense. And it's great that you bring it up, because I don't think organizations are there yet.
I think that's awesome. Last question: how can people continue to connect with you and learn from you? Of course, as soon as it's out, which will be January 7, 2025, they should get Mindmasters: The Data-Driven Science of Predicting and Changing Human Behavior. What else can people do to continue learning from you?
Sandra Matz: Yeah. So my husband and I recently set up the Advanced Technology and Human Performance Lab at Columbia Business School. We have a website, human-performance.ai, where we try to collect a lot of the insights that we've written about and the insights that we generate, through talks, lectures, and so on. So that's another way.
Kaihan Krippendorff: Awesome. Well, thank you for doing that for us. Thank you for all the work that you've put in leading up to this book and continuing, and thanks for taking some time to sit down with us and share. Sandra, it's been really great talking to you.
Sandra Matz: Thank you so much for having me.