Episode 344 - Humans and AI
Transcript:
Pete: Hey, Jen.
Jen: Hello, Pete.
Pete: So as you know (listeners don't yet), I'm working away on this keynote that I have to deliver in a few months on how AI can enable leaders to be even more human.
Jen: Mmm.
Pete: And I'm kind of deep down this rabbit hole, and I've got an idea that I want you to help me pull to pieces.
Jen: Okay.
Pete: Because I think I believe it, but I don't know if I do. And so, let's workshop it together. The idea is that humans are a lot more like AI models than we think.
Jen: Aha, okay. Let's get into it. This is The Long and The Short Of It.
Pete: And even as I say that, I'm like questioning whether it's true. So this is very much raw...I've taken the cup of noodles off the shelf at the supermarket, and that's as far as...like, I'm not even at the checkout yet. So, this is a very raw thought. So, what do I mean by this? Humans are more like AI models than we think. I have experimented and played and continue to use various Generative AI models every single day, as I know you do, as I know many people listening to this do, and as I know is going to become more and more common. It's like, if I have a question, rather than going to Google, I go to ChatGPT, or I go to Gemini, or I go to Claude, and I go, "What do you think about this?" I'm trying to get in the habit of this, and it's becoming almost ubiquitous, almost like a part of my daily routine of how I work now. And one of the things I know, and I've heard, and I've come to believe, is that the quality of the output is determined by the quality of the input: the more context I give the AI model, the better the answer. The broader I am, the less specific the answer is. So it's really, for me, about, can I give the appropriate context and ask the right question, framed in the right way, in order to get an output that is actually useful?
Jen: Yes.
Pete: And what often happens is, I don't do that, and I get an answer, and I read it and go, "Hmm, what if I told you this? How would you think about it then? Have you thought about this?" And I end up having like essentially almost a coaching conversation with an AI model, by giving it more context and seeing what happens.
Jen: Yeah.
Pete: This, I actually think, is how we should be interacting with humans, more and more. And so, the parallel is, when I work with leaders in organizations, I so often have conversations that sound something like, "I need to have a difficult conversation with Jen, because Jen didn't deliver the project on time." And I'll say, "Cool. Have you thought about why that might be? Have you got Jen's perspective on what's happening in her world that might have prevented her from delivering the project on time?" "Well, no, I just know we missed a deadline." And I'm like, "Cool. Okay, so what would it look like if we found out? What if we sat down and had a conversation with Jen and said, 'Jen, where are you stuck right now? Why is this project not being delivered on time? How can I support you? Where do you need me? What is the context that you have?' Let's open the aperture to a little more empathy, and promote a little more curiosity in our conversations." And the more and more I interact with language models like ChatGPT and Gemini, the more I'm realizing it's the same thing. "Here is the question I have. What is the context you need? Oh, you didn't quite get it. Let me add some more. Let me try this. Let me try that. Let me experiment with this. Let me experiment with that." I feel like I spend so much of my time working with leaders trying to promote curiosity, and the penny often drops, and leaders are often like, "Oh, right, I could ask more questions. I don't have to always have the answer. I could try and find out what that answer might be, by asking another human." All the while, we're all doing that with these AI models, like, "Oh, let me ask ChatGPT. Let me ask Gemini. Let me be curious. Let me ask questions." So, this is the connective tissue that is so raw in my head. I feel like we could ask questions of humans the same way we ask questions of Generative AI models, and vice versa. What do you think? Please feel free to tear this idea to pieces.
Jen: Okay, here is what I think. I want to first talk about what's different, and then, see if there's a way to bridge the difference.
Pete: Yes.
Jen: So, what is different about asking AI versus a human? I'm not afraid of hurting AI's feelings, and I'm not afraid of what AI thinks of me. So it has eliminated FOPO, the fear of people's opinions, which is such a dominant fear for so many of us.
Pete: So true.
Jen: And because ChatGPT is not people, we don't fear its opinion. In fact, we confide a lot in our AI models, things we don't even intentionally mean to confide. But we're so not afraid of what it thinks of us that we just say the thing, instead of having to wonder, "Is this going to offend it? Is it going to cry? Is it going to get mad at me? Is it going to go tell someone else?" So like, all of our FOPO is gone. And so, we're able to communicate better. So I think the bridge is actually asking yourself, "What am I afraid of?"
Pete: Wow. That is such a big mic drop...at such an early point in this podcast.
Jen: I think about it a lot.
Pete: Oh my god, it's so true. The reason I don't ask a human the direct question that I would ask of AI is because, subconsciously or consciously, I'm paranoid, I'm worried, I'm afraid of, "What are they going to think of me if I do this? They're going to hate me. They're going to think I'm the worst leader ever. They're going to think I'm a terrible, disrespectful person. They're going to feel like that was too blunt. And then, they're going to get annoyed about it." It's the fear, it's FOPO, that gets in the way.
Jen: Yep.
Pete: Ooh. So the question that comes to my mind then, in that instance, that we could ask ourselves (it's almost like a self-coaching question), "What if this was an AI? What would I tell this person if it was AI?"
Jen: Right.
Pete: And like all these things, this is not a blanket rule that's always going to apply. Because I think there is a real, genuine, important context for us to go, "Put on the lens of the human being on the other side of this conversation." Right? So I'm not saying, "Always use this, and be as blunt as you would with AI." However, I think often the FOPO gets in the way of us having clear conversations. The FOPO gets in the way of us going, "This is the issue at hand. How do you think we could move forward?"
Jen: Yes. So what seems, to me, to be present in both contexts is the curiosity piece.
Pete: Yes.
Jen: Which is one of the things, as a coach, that you possess in abundance and are also so good at cultivating in others. The other piece is not relevant to AI, which is the empathy piece. You don't have to empathize with a robot, but you do seek to empathize with a human.
Pete: Right. And curiosity enables empathy in humans.
Jen: Right.
Pete: If I ask questions and I'm genuinely interested in what your context is, I can't help but build empathy with you. Because now, I understand more about what's happening in your world.
Jen: So, one of the things I do with clients sometimes, Pete, when they're stuck and it's a fear-of-people's-opinions reason, is we'll just talk through, like, "What is the deep fear here? What are you afraid that they might say or think about you?" And so, it'll be like, "Who are you to reach out to me?" And I'll say, "Okay, so let's practice." And so, I say to them, "Who are you to reach out to me?" And then, they answer. And they're like, "Oh, okay, I actually have an answer for my worst-case scenario. Maybe it's a little less scary." So now, I'm wondering, would this be a good AI prompt? If you don't have another person in front of you, to go to your AI and say, "I need to reach out to this person. I am so afraid that they're going to be mad at me for reaching out, or say, 'Who do you think you are?' Can we just run through that scenario?"
Pete: Yes, totally. "And help me through it. How might I frame this in a way that means it's less likely for them to go, 'Who the hell do you think you are? Do you know who I am?'"
Jen: Or if they do say it, that I'm like, "I actually know who I think I am."
Pete: Right, right.
Jen: "This is who I am. I'm the person who asked you the question."
Pete: Right. Yeah, I love that use case. That's brilliant. The other parallel, in my mind, is, we humans each have our own very specific, very unique data set. So what I do when I use Gen AI is, sometimes, ask the same question of various models, because I know that they each have access to different data sets. Like Google's Gemini has access to YouTube, for example, which is a ridiculous corpus of data that it can tap into. And in theory...I don't know if this is actually true, but in theory, other Gen AI models don't have access to that, and they might have access to other things. And there's this whole arms race going on, at the moment, about who has access to what data, and who can buy what data, and what relationships do we have with platforms like Reddit, and on and on it goes. Essentially, the argument being, at some point, the only differentiator in this technology is going to become what data you have access to. And so, if I have a very specific question about music, would I go to an AI model that I know has the best corpus of data for music? Or if I have a specific question about something that might be in video, would I go to the one that has more access to YouTube? So, those sorts of different data sets exist within AI models. And the thing I keep thinking about is, that's the same with humans: each human has their own specific, unique worldview. No one knows everything that you know, in the way that you know it. No one believes everything that you believe, in the way that you believe it. No one has the lived experience that you have or the worldview that you have. And so, recognizing that, how can we think about interacting with humans with this posture of curiosity, through the lens of, "Huh. You have this crazy, unique, amazing, wonderful, rich data set that I could tap into by asking you questions. And can I show up in service of that? And you could teach me what you know, because I don't know what you know."
Jen: I love that so much. Well, and I think there's a flip side to the data set too. So I don't know if you do this, but I occasionally go into my ChatGPT memories and make sure that everything being stored in the memory is accurate. And sometimes, it has some cockamamie idea about me, like, "Jen Waldman loves oranges." And I'm like, "Actually, I despise oranges. Don't even peel an orange near me, because I cannot be near a peeled orange. Get it away from me." So I don't know how it came up with the idea that I like oranges, but I hate oranges. But it is seeing every one of our conversations through the context of the data it thinks it has about me, just like every interaction you have with every human you know is being viewed through the lens of what they think they know about you. So a prompt I've been giving ChatGPT lately, because I use it a lot, is, "Before you answer this, what context do you need from me to come up with the best answer?"
Pete: Oh, nice. I do the opposite. I say, "What do you know about this topic, first? Like, what do you know about the workshop that I ran on giving and receiving feedback?" And it will be like, "Oh, boom, here's what I know about it, because we workshopped it last week." And I'm like, "Oh, you do have a reasonable understanding of that." Whereas, you're doing the opposite, of like, "What do you need from me?" That's good.
Jen: And my guess is that both directions work, as long as your intention as the user or as the conversationalist is to make sure that we're on the same page.
Pete: Totally. And I can't help but say, the other thing that is true in what you just described is something I've raised in a few conversations with leaders recently, like, "Is this the most important skill as we enter this new world?" It's evident in what you shared, and I think it's evident in how we interact, or could and should interact, with humans: discernment.
Jen: Yeah.
Pete: Just because we all have our own unique data sets, with our own specific worldviews and our own specific experiences, just like the data of an AI model, it doesn't mean that they're categorically, factually, 100% accurate every single time. And so, there's a level of healthy skepticism and discernment that should be required when we're interacting with these things, in the same way that...not to be super skeptical of all humans, but we already do have discernment, like, "Oh, when I asked Jen this question about this thing, she told me this answer. And I was like, 'Oh, interesting. I get some of that, but I'm questioning a little bit of that. Let me see if that applies to my own world.'" We should be, like, questioning and discerning things. And I don't know, there's this weird thing about things that we read on a computer, where people just sort of take them to be true because they're written,
Jen: Yeah.
Pete: And it's like, no, no, no. Discernment is still super critical. Super critical. In fact, more critical.
Jen: So true. That is fascinating. I need to unpack that further, that we are more discerning of humans than we are of robots. Like, what the hell?
Pete: Okay, here's a way to...I use this as a way to remind myself of this. Ask ChatGPT or Gemini, whatever model you want to use, ask it about something you know really well.
Jen: Yeah.
Pete: And what often happens is, it will say something and you'll be like, "I don't actually think that's true," or, "You don't quite understand this as well as I do."
Jen: Yeah.
Pete: We have that perspective when we know about it. And yet, when we ask it about something we have absolutely no knowledge of, we go, "Well, I guess that's 100% true."
Jen: Yep.
Pete: "Because...I don't know."
Jen: "ChatGPT said so."
Pete: Right.
Jen: "Jen Waldman likes oranges."
Pete: Exactly. I feel like this is true of news stories. You know? If there's a news story about an event that I was at, I'll go, "Oh, interesting. They took that angle of that event. That's not actually what happened, because I was there and that was out of context." But then, I read the next article about something I wasn't involved in, and I'm like, "Well, I guess it's true, because it's in the news."
Jen: That is so interesting.
Pete: And so, maybe the connective tissue there, then, if I'm still thinking about this raw noodle, is, in the same way that humans are unbelievably flawed and vulnerable and unique, so too is AI. It is flawed. It makes mistakes. It trips over itself. It tries to convince you of things that aren't actually true. There is, like, humanity kind of baked into it, given the fact that the data is produced by humans in the first place.
Jen: Hmm. Wow, Pete. I think that your thesis statement that humans are like AI models is so provocative.
Pete: Sorry.
Jen: No, in a good way. Like, I imagine all ears will perk up when you make this statement. Because my initial impulse is like, "No, that's not true. I'm discerning."
Pete: Right.
Jen: But if ChatGPT told me, I'd be like, "I really should consider that."
Pete: Yeah.
Jen: But I do think people are going to perk up, because we are so unseasoned at interacting with this technology, and so afraid of losing our humanity, that I love that you're pointing out how we can not only retain but magnify our humanity by using these tools.
Pete: Right. And in case the other point I made wasn't clear, I'll try and say it again in another way. I spend so much time with leaders trying to help them see that curiosity with other humans is one of the biggest superpowers they have available to them as leaders. It enables you to build empathy. It enables you to scale your impact as a leader, because you can't do everything; you have to empower others to do it. It enables you to coach other people. It enables you to learn things. And I feel like I'm having so many of those conversations, and I'm going, "Why don't we know this about leading other people and working with other people?" And yet, when I look over at the best practices for leveraging AI, everyone's talking about, "Be curious. Ask questions. Clarify what you want," all of the things that I think are the same as how we should be leading people. So, that is the other connective piece that I just can't shake. If you're thinking the best way to use AI is to be super curious, guess what? The best way to interact with someone in your team is probably also to be really curious.
Jen: And that is The Long and The Short Of It.