#TDSU Episode 271: What makes us human w/ Jeff Moss


Jeff Moss returns to chat about the rapid evolution of AI in the workplace.

  • ⏱️ Timestamps:

    00:00:00 - Legal automation and AI's impact

    00:00:30 - Meet the standup team

    00:01:17 - Jeff's journey in customer success

    00:02:17 - The perfect haircut experience

    00:03:11 - AI: Savior or just a tool?

    00:05:35 - Education's AI curriculum

    00:06:14 - Gaps and challenges in AI

    00:12:12 - Responsibility in the AI era

    00:28:01 - Future skills and AI's evolution

    00:37:32 - Philosophy, reasoning, and AI

    00:46:18 - Lessons from food innovators

    📺 Lifetime Value: Your Destination for GTM content

    Website: https://www.lifetimevaluemedia.com

    🤝 Connect with the hosts:

    Dillon's LinkedIn: https://www.linkedin.com/in/dillonryoung

    JP's LinkedIn: https://www.linkedin.com/in/jeanpierrefrost/

    Rob's LinkedIn: https://www.linkedin.com/in/rob-zambito/

    👋 Connect with Jeff Moss:

    Jeff's LinkedIn: https://www.linkedin.com/in/jeff-moss-ep/

  • [Jeff] (0:00 - 0:22)

    One is just redlining agreements. I think before, you'd have to go to law school and do all these things. I've seen some amazing things there where that can be automated. We can say, hey, review this, identify if there's any sort of risk, act like you're somebody else that was trying to sue me, whatever.

    Those types of prompts help you to be able to stay locked tight. And with a lot of those things, is it good enough? Is it protecting me at least from the big things?

    [Dillon] (0:30 - 1:16)

    What's up, lifers, and welcome to The Daily Standup with Lifetime Value, where we're giving you fresh new customer success ideas every single day. I got my man Rob with us. Rob, you want to say hi?

    What is up? And we've got JP with us. JP, do you want to say hi?

    Sup? Sup? And we have Jeff with us.

    Jeff, can you say hi? Howdy. Howdy.

    And I'm your host. My name is Dillon Young. Jeff, welcome back.

    Thank you for joining us again. That means we must have done something right the first time, or maybe we just didn't have enough time to scare you away. We're going to change that this time.

    We're going to talk for a little bit longer. Before we do, Jeff, introduce yourself, please.

    [Jeff] (1:17 - 1:48)

    Yeah. So I was born and raised in the customer success world. Joined a startup out of college, did kind of every job under the sun, and then spent a number of years consulting different companies on customer success, retention, expansion, even getting into sales and product, all things that help customers stay longer and keep purchasing.

    And currently I'm a VP of customer success while also doing some consulting on the side. So playing the operational role, but also an advisory role.

    [Dillon] (1:49 - 2:17)

    Flirted with the dark side, sales. I did too. I just don't like to talk about it.

    It's like my tour of duty in the military. I don't know which dark side you were talking about. I know, Rob.

    Thanks for consulting. The only pure thing is CS, guys. Don't you know?

    Jeff, you know what we do here? First, actually, I have a question before we kick it off. Did you just get a haircut?

    Looks so fresh.

    [Jeff] (2:17 - 2:35)

    I'm trying to look fresh. You know what? My dream in life was to have a barber that I could sit down and say, you know what to do, the usual.

    And I achieved that about four years ago. And I may never move from where I live because I sit down, I don't say anything, and it just, I come out. It's perfect.

    [Rob] (2:36 - 2:57)

    That sounds like the opposite of my ideal barber experience. If I had a barber who wasn't me, I know. I know.

    I cut my own hair, Jeff. If you're ever swinging through Boston, you know. Get a little cut, a little touch up.

    I'll hook you up. But the idea of not saying anything in a barber chair, if I'm there, it's purely to say things.

    [Jeff] (2:58 - 3:09)

    No, no, no. We're friends. We're friends.

    I'm not doing the Ron Swanson, but I don't have to tell him what to do with the hair. He just gets it done. Then we cool.

    [Dillon] (3:11 - 3:24)

    Okay, Jeff, you know what we do here? We ask every single guest one simple question. The only thing that really changes is how long we talk about it.

    What is on your mind when it comes to customer success? Why don't you hit us with it?

    [Jeff] (3:25 - 4:11)

    Yeah, I would say there's been a lot of talk about this recently, but what's on my mind is obviously there's a lot of buzz on AI. It's on everybody's mind. But I think one of the things I've been thinking about is what are the gaps that it's not filling?

    Or people have been talking about this progression of like, yeah, it helps the productivity, but we got to have it do more. But there's just kind of this feeling that it's just going to solve all of our problems. But I think you've started to see some people posting about we have a data problem.

    We have a problem with how we train it or what we apply to it. And I think if we just think it's going to come in on a white horse and save the day, I think we're actually far, far away from that. But we are close to a lot of things if we think about it properly in terms of how it can impact retention, prediction of churn, or even on the expansion side as well, how it could facilitate things too.

    [Dillon] (4:11 - 5:34)

    Let me fire this up. JP, I bet you saw this too. I'm going to see if I can.

    This is a headline from a recent edition of Pavilion's newsletter, Topline. I actually haven't read it, but it says why 85% of AI projects are expensive failures. JP, did you see that headline?

    No, you haven't seen that one. I know you follow that stuff and that newsletter. But I haven't read it, so I don't know if they mean products that companies are launching, or initiatives that companies are launching that utilize AI.

    But I think the topic is apropos for where we're at currently, which is everybody's grappling with it and trying to find ways to use it. I actually just had a conversation on a different podcast about AI in a number of different capacities, which I can sprinkle in here. The thing I found super interesting, and I don't know if this goes anywhere, but there was a recent law passed in the UAE, I think it is.

    Have you guys heard about this? They have to start teaching a curriculum on AI in every school starting at the age of four. That's bananas, right?

    [Jeff] (5:35 - 6:13)

    Well, I think to your point, I think the focus is right in terms of you've got to be up on the times, and typically even schools are always behind the times. I had a marketing class in college, and they were like, this is GE. And I was like, we're beyond GE right now.

    We're about 40 years past this being the gold standard. So I think obviously there's something there. But to that point, it's like, I would love to know what the heck you're going to teach about, because it's one thing to just interact with it or things like that.

    It's another thing to actually say, we've got to apply some discipline. We've got to have some theories. We've got to have some perspectives so that we can get the most out of it.

    And so that would be what's interesting to me as well.

    [Dillon] (6:14 - 6:44)

    That's a great call out. Great call out, because if they start building that curriculum now, do they then stop? And in six months, is it out of date?

    Does it even matter for a four-year-old? You're probably teaching very basic concepts about how to trust information and so on and so forth. That's a big concept.

    JP, I'd love to understand. Let's just answer the question at face value. What do you think the gaps are with AI today?

    What is it capable of versus not capable of? And I'll give you a challenge. You can't use the word nuance.

    [JP] (6:46 - 8:39)

    Yeah. So some of the challenges are just with the human element of resistance. I think that's one of them.

    Whatever emotional feelings people have toward it, I can certainly say in my case there was definitely some of that present. But beyond that, just getting down to using it as a tool, there's maybe the choice problem. It starts there as well.

    If you go to something, you're like, wow, AI is so powerful to do all these things. It's like, well, where do I start? And I think that's why I was looking and I saw things about learning about prompts, prompt engineering, prompt engineer being a role.

    So I'm seeing these different things where it's like, how do I use the prompts effectively? How do I use this technology effectively? And then I guess lastly, there's the ethical challenge.

    Some people think it's... People have different levels. Some people will submit a whole college essay written by AI and not check it or anything, which is one way to do it.

    But maybe somebody is writing a script for something and they're like, hey, give me something to start with. And they want to start from there and then use their own input. But they're just using it to maybe outsource some of the more, I guess, difficult things or some of the writer's block, whatever you want to call it, so they can get to iterating faster.

    And so, yeah, I think there's the ethical element as well. How are people using this? You go on LinkedIn, and am I messaging people with an AI response?

    That's another level. Who am I connecting with now? Your representative, or is this you?

    [Dillon] (8:39 - 9:06)

    I just want to ask one quick question, JP. This is my licking the finger and testing the wind here. Don't you give me that face.

    I fixed it. I fixed it. Would you feel the same way if you knew the person had spent a great amount of time training an AI on how to talk to people automatically on their behalf on LinkedIn?

    [JP] (9:07 - 9:22)

    The thing is, I mean, I read the message, right? I don't have an AI representative to read the message. I read the message with my human eyes.

    So if I read the message with my human eyes, and it looks like it's from AI, then fail, failure.

    [Dillon] (9:23 - 9:25)

    Even if you understand the spirit of it?

    [JP] (9:25 - 9:27)

    Yeah. That's not you.

    [Dillon] (9:27 - 9:32)

    Okay, cool. Jeff, I just wanted to get that question in. You go ahead.

    [Jeff] (9:32 - 12:12)

    Yeah, I'll go to a couple, maybe more practical ones. And I'll give you a couple examples that I found, which you might find interesting. One of the biggest problems is that the output AI gives you looks just like good output, but unless you know what you're looking for, you can't tell the difference. I'll give you a quick example.

    I use AI to help with some of my LinkedIn posts. And sometimes I make a mistake where I'm saying, hey, I want to post on this. And then before I type in my content that I would like to have buttoned up a little bit, I'll accidentally hit enter and it goes in.

    So I just have the topic, and it'll produce a post that looks just like the post I want it to be. But to my eyes, this post sucks. It's not good.

    It's got a bunch of information. It looks like a killer post. It's got the formatting, it's got all the goodies in there.

    And then what happens is I go, shoot, let me put all my content in. And then I hit submit. And then another output comes and it looks just like the first one, but it's a hundred times better.

    And I use that as an example to show the problem, even for, let's say, the four-year-old kids in Dubai or wherever in their example: they're going to learn, oh, I just put these things in and an output comes out, and check, I've completed it, the AI has worked for me. And my issue is that it looks right, but unless you have true human expertise in what you're trying to do, it is not right. So that's one big gap.

    And the other side of that coin is, I think people are looking to abdicate their responsibility. You're saying we never figured out how to retain our customers. We never figured out what actually matters to them.

    Perhaps if we integrate all of our data, it'll just bubble up to the top and magically tell us the answer that we never wanted to figure out, or never put in the required effort to figure out in the first place. And again, I guarantee it will give you an answer. I saw a demo on LinkedIn the other day; somebody has a new integrated CS AI tool, and they put in questions.

    Hey, tell me what's going on in my accounts, what I should do. And it popped up. And I said, it looks like it would be useful, but if you actually stare at it, it's only telling you these 10 accounts are about to die.

    I have no idea what you should say to them, but how about you reach out to them? And it has all these outputs, tells you all the usage. But again, there's a huge gap here, because it only looks like it's useful.

    Today, I should talk to these seven people. One, I have no idea what to talk about. And two, all it's told me is that they're dying. It doesn't tell me why they're churning or anything like that.

    It doesn't tell me why they're turning anything like that. So those are just a couple of different examples to me of these gaps where you still have responsibility to figure out what success is, and then push that theory into these tools so that you can then use them to be more productive or more accurate or whatever those things are that you're trying to accomplish. But those are a couple of the gaps that I've found.

    [Dillon] (12:12 - 12:14)

    I have questions, but Rob, go ahead.

    [Rob] (12:15 - 14:06)

    I wrote down five gaps. I wrote down the ethical gap, like JP was saying. I was thinking about that in terms of, I don't know if you guys heard, Grok recently went off on some tangents, prompted by some potentially rogue ex-employee, who may or may not run the company, who made a code change in the middle of the night a couple of weeks ago. Did you guys hear about this?

    If you haven't used Grok, it's X's, formerly known as Twitter's, AI. And it posted a whole lot of opinions about South African politics, which seems oddly specific to one individual at that company. So there's definitely an ethical gap.

    And one of the big things I learned when I first got into SaaS came from an interesting blog I read. Edge.org used to have this blog where they would have different academics across different fields answer one question per year. And one year it was: this idea must die.

    And somebody in that blog said, the idea that must die is the assumption that tech is unbiased, which is so true. And that formed my thinking around the ethical gap, but I think there are others. So, trust gap. Dillon, you mentioned trust as something to be learned early on.

    I think that's a good point. There's an accuracy gap for sure. I think the datasets are kind of small.

    And actually, I treat that as a separate one. The fourth gap is a data gap. There are a lot of scenarios, for example, in which AI has been over-trained on hyper-specific contexts.

    Like now even my precious em dashes are under attack. I used to be a great em dash user.

    [Dillon] (14:06 - 14:13)

    Are you a space-em-dash-space guy, or no spaces on either side?

    [Rob] (14:13 - 14:25)

    No, no, no spaces on either side. Those are different. They're different.

    I think the other one has a different name. I can't remember what it's called. But now it's like, I use my em dashes just like I have done for 15, 20 years. And I look like a bot?

    [Dillon] (14:27 - 14:30)

    Like, well, your writing is also soulless and vapid.

    [Rob] (14:30 - 15:04)

    So when you combine the two. The last one I'll mention, just since this is, you know, from the customer success context, I do think there's an experience gap. Like, have you guys ever had actually like a delightful experience with AI?

    It's been pretty rare for me. It has happened very, very rarely where like I get a random ticket deflection and I'm like, great, I don't have to sit on hold, but maybe I will. I kind of want to talk to the agent and be like, yo, what's up?

    But no, I mean, I think that's a big gap too. I don't know which of those gaps you guys want to dial into most, but those are the ones that are on my mind.

    [Dillon] (15:05 - 16:35)

    It's funny. All of this is so interesting to me. And I want to rewind and go back to Jeff, to this idea of it looks right.

    I really like the example of, these accounts are about to churn, but it gives no further direction on what to do. I think it's sort of easy to say this, and it feels like a cop-out, but I don't think it is. I really think that's the problem of the prompter not asking a good enough question.

    If you only ask, hey, who is about to churn, that's all you get back. AI is incredible at reflecting information back to you in a way you hadn't thought about before.

    Or maybe you have thought about it before and it supplements it with its database, but it can't perform magic. They have trained it.

    They have optimized these things to not infer anymore. What we used to refer to as hallucinations, they're trying very, very hard to eliminate entirely. But what that does is it demands a much greater degree of rigor from the prompter in delivering all of the information the model needs in order to provide a suitable response.

    Would you agree with that?

    [Jeff] (16:35 - 19:13)

    Yeah, well, and I'll back up even further. This is where I'm going with it: all the talk is about AI or whatever. But like I said, back to my point about responsibility, that responsibility to figure out what actually matters will impact more than just the prompter making better requests and more specific questions; it's almost kind of an interesting chicken and the egg.

    In order to ask the right prompt, I'd actually have to know what I'm looking for. And if I knew what I was looking for, I'd probably also solve my data problem, because I would know what data should actually be in here. And there are three parts to the data.

    There's your CRM, there's Gong or whatever, your call recorder, and then you have your product. And my issue with all three of those is that all three of them have the same problem. If I knew what I was looking for, if I'd done the work to figure out what results matter to my customers, what action milestones they have to take to be successful, and the timing of those things, then I'd probably have the proper fields in my CRM tracking that.

    I'd actually bring up the proper subject matter in my phone calls, my meetings, because the other thing is these tools analyze your calls, and the sentiment is all there. If I don't do proper discovery, and I saw Rob was talking about discovery in another post recently, then it can't pick up the things that matter.

    And then in the product side, half the time, we're not even tracking the things that matter anyways. And then even within that, there's thresholds. I'll give you another example.

    I worked with a company in the restaurant software space. They have this tool that prints labels for when you prep your food, right? You have to date all your ranch dressings, whatever, for the week at Popeyes or wherever.

    And then you've got to put labels on them. That is a thing they had the ability to track. But, for example, there's a simple thing you can do.

    Anybody can do this. And you can do this now even without AI; it doesn't matter. You have to do it anyway.

    They figured out, okay, if somebody is a Popeyes restaurant or whatever it is, how many items do they typically create per week? Oh, they create 100 items a week that have to be labeled. Okay.

    This company, this account, it's been two weeks, so they should have done 200 labels. I can know right now, even if the customer doesn't think they're at risk, that they're at risk, because they've had this tool for two weeks and they've only done 23 labels. And so that's a leading indicator of risk that could be surfaced properly if I know what I'm going for.

    And then to your point, coming full circle back to your prompt, I could say, hey, AI, tell me which customers have not hit this threshold or whatever, and those I can go after. And that also solves what I do about it, because now I have a specific outcome.
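    To make the arithmetic in Jeff's label example concrete, here is a minimal sketch of that leading indicator: expected usage is the baseline volume per week times weeks live, and an account gets flagged when actual usage falls well short of it. The account name, field names, and 50% threshold are all hypothetical, invented for illustration.

    ```python
    # Minimal sketch of the leading indicator described above: expected usage
    # (baseline per week x weeks live) vs. actual usage. All names, fields,
    # and the 50% threshold are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Account:
        name: str
        weeks_live: int         # weeks since the customer got the label tool
        labels_printed: int     # actual usage observed so far
        baseline_per_week: int  # typical weekly volume for this kind of restaurant

    def flag_at_risk(accounts, threshold=0.5):
        """Return (name, actual, expected) for accounts below the threshold."""
        flagged = []
        for a in accounts:
            expected = a.baseline_per_week * a.weeks_live  # e.g. 100/wk * 2 = 200
            if expected > 0 and a.labels_printed / expected < threshold:
                flagged.append((a.name, a.labels_printed, expected))
        return flagged

    # Jeff's numbers: two weeks in, 23 labels printed against an expected 200.
    print(flag_at_risk([Account("Popeyes #42", 2, 23, 100)]))
    # -> [('Popeyes #42', 23, 200)]
    ```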

    [Dillon] (19:14 - 22:41)

    But that's an analysis problem, not an AI problem. I think here's where we get stuck, and everybody has done this. When you are first introduced to AI, this is a rite of passage.

    You have to do it. You assume it is capable of more and of inference and of assuming context and all of these things that it does not have, as I was referring to previously, and I think that you're alluding to in a number of different ways here. I refer to AI as the carpool lane.

    So you have to provide the bodies. You got to have two or more. Let's refer to that as the data.

    But if you have that, you can get in your car, and then you can get in the express lane, and you will get to your destination faster. But once you get to that destination, it's still you there. The carpool lane does not transform into something else.

    It does not solve all of your problems, but it got you there much faster. There were some requirements for you to use the carpool lane, but it was effectively a shortcut for you. And that's how I think about AI.

    Let's take your same example. You could use AI to help walk you through how to think about what the key indicators might be for your customer set and what makes them healthy or not. You could explain your product to a T.

    You could deliver your mission statement, give it your pitch deck, whatever, and say, this is the problem I believe I'm solving. What do you believe are the key metrics I should be looking at? And then you can continue to refine that.

    Now, then you've still got to build the systems that allow you to measure that. But then you can collect that data set, and you can deliver that back to AI. And you could say, are there any outliers I'm not paying attention to?

    You could ask it the very basic question of who's not meeting the threshold that we've created, those sorts of things. And then you can continue your conversation with AI to say, okay, well, actually, I do believe that they're healthy because they may not be printing labels, but they're doing X, Y, or Z. Okay, well, you can help your AI get better at that point.

    But if you just dump the data in and say, AI, save me, it's never going to work. And I remember, this is the last thing I'll say. I remember early on, and it may have been like Mickey Powell, who's been an AI Stan, I think, from the moment Sam Altman was born.

    He said something along the lines of, if you're not spending 25, 30, 45 minutes thinking about and designing your prompt the first time, and I'm just using sort of arbitrary numbers here, but assume this is a process that would otherwise take you four or five hours, right? Maybe it's a spreadsheet analysis, or it's something else.

    If you are not spending what feels like too much time designing the prompt, you should expect that you're actually probably not going to save that much time, because you're just going to keep having to re-center your AI to try and get it to move in the direction you want. You have got to be incredibly exacting with your prompt. And these prompts are sometimes three, four, five pages long; you've got to be overly descriptive in a way that feels counterproductive early on. JP, why don't you jump back in, please?
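    As an illustration of the exacting, multi-section prompt Dillon is describing, here is a hedged sketch; the section headings, field names, and data row are all invented for the example, not taken from any real tool.

    ```python
    # Sketch of an exacting, multi-section prompt in the spirit described
    # above. Every section name, field, and data row is invented.
    PROMPT = """\
    ROLE: You are a customer success analyst for {product}.
    MISSION: {mission}
    HEALTH DEFINITION: An account is healthy if {health_rule}.
    DATA (CSV, one row per account):
    {data_csv}
    TASK:
    1. List every account that fails the health definition.
    2. For each, name the failing metric and the size of the shortfall.
    3. If something is not in the data, answer "unknown" -- do not infer.
    """

    print(PROMPT.format(
        product="a label-printing tool for restaurant kitchens",
        mission="help kitchens stay food-safety compliant",
        health_rule="labels printed >= 50% of the segment baseline for weeks live",
        data_csv="account,weeks_live,labels_printed,baseline_per_week\n"
                 "Popeyes #42,2,23,100",
    ))  # paste the output into whichever model you use
    ```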

    [JP] (22:42 - 26:37)

    Well, when I first spoke, that's one of the things that I said was a gap. I said prompt, I said prompt engineering. So we took a long journey around the earth a few times.

    But yes, prompt engineering is one of the gaps, right? That's one of them. And so I'm going to actually pivot to something different.

    As someone who's fairly creative, and just loves to be in that creative, innovative, iterative space, I'm going to say that there's actually a lot of fun to be had in AI, which also presents the issue of addiction. You know, sometimes people become over-reliant on things.

    So, you know, when MySpace first came out, I remember when that hit the streets. Whoo, let me tell you something, that top eight. Who was in your top eight? My generation, I'm showing my age, we used to learn HTML just so that we could code those pages on MySpace, because that was the technology that was available at the time. And back then, hey, JP tried to draw a Dragon Ball Z dude, and it didn't come out well; it's like, I got the head, then jellyfish.

    But maybe now JP is like, oh, wait a minute, can I use this to help me? Maybe I can outsource the animation to it, and I can use it to iterate on my creative ideas and take me someplace further than I could go before.

    And I see some of these things intersecting when you begin to talk about art. For example, one time I was like, let me see what this can do for LinkedIn posts. So I was asking it, write a LinkedIn post, and I'm just seeing what it's got.

    Because I'm like, I know I can write something. So I'm gonna see what you got, what you got. And this thing was trying to give me some different, you know, things.

    And I said, write it like Chris Rock. I was like, let me see if you can do Chris Rock.

    And it was really funny, because when I read it, I could sort of hear the beats it was trying to hit. So it was really interesting. Now, mind you, I didn't end up using any of it.

    But it was a really interesting thing to see, this idea of a comedian's style. It's not intellectual property, per se, but it is something that's really unique to them.

    If I come out doing something like this, you're gonna be like, whoa, whoa, whoa, what are you doing, a terrible impression of Chris Rock? But there's this really fun and addictive side to AI, which is, to reference something you said, Jeff, the chicken and the egg, right?

    If I get in, I have an idea. Maybe my original idea isn't the same as what it ends up being. But where am I going to go?

    What kind of time am I going to spend getting there? Do I start in one place and then end up someplace completely different? But then, in terms of a community, right?

    Like, this is a one to one thing with the AI, right? Me saying, let's do some creation. But then when it comes to the community, you know, to go to what Rob was saying about his em dashes, right?

    Somebody can maybe like to use em dashes or ellipses or whatever. And now we have a modern version of the witch hunt. Oh, magic.

    Oh, what is that? That paragraph was too well written, must be AI. So it goes both ways.

    Someone outreaches to me and I'm like, that's clearly AI. But then maybe there are people that are using it and it's working. I think I saw an example the other day, where someone used it for a short comment.

    The problem is people are always using it for blocks of paragraphs, and you're like, I know you didn't write that. But, you know, I'm dropping it there. I wonder.

    [Dillon] (26:38 - 26:42)

    Yeah, yeah, never mind. Jeff, you're up.

    [Jeff] (26:43 - 28:00)

    No, I agree with a lot of that stuff. I think maybe we're also highlighting too. I think obviously you want to get in there.

    You got to mix it up with it. You mentioned you got to be doing these things, you know, regularly. I think that's all well and good.

    I guess the other piece that I'm trying to push on as well is, it's not all the way there yet. It doesn't tie my shoes for me yet. And hopefully it does.

    But if that's the case, then I think we still have a huge responsibility. Whether you use it to subsidize some things you're doing today, please do, but you still have a responsibility to figure out the answers to the questions that would even enable you to know what data to pull in, to then prompt, to do whatever. And so I also think, don't stop doing all the good things that you should be doing.

    Don't stop figuring out what matters to your customers, the results that matter, the actions they got to take, the metrics that matter, what lands, what doesn't, you know, those are all things that you cannot take your foot off the gas on. And then as other things become available and we can pull them in and find new applications, let's use the AI or other tools that come out there. But I also think there's a little bit of a pump the brakes moment in terms of your responsibility is not up yet.

    You still got to figure all these things out if you haven't. And until it comes in and pushes you out of the way, you have a responsibility to do that to drive value for your customers and for your company.

    [Dillon] (28:01 - 28:20)

    I want to do a quick exercise. I'm going to pose this to you, Rob. Let's think about tasks that we learn today.

    And it doesn't even have to be tech related: what do you do in your job today that we don't think we'll need to learn within five years because of AI?

    [Rob] (28:20 - 28:26)

    That's a clever question to think about that one. Tasks we do today. So I thought about this.

    [Dillon] (28:26 - 28:30)

    That you learn, like you got to go to college or school or you got to get a certificate.

    [Rob] (28:30 - 28:32)

    Oh, so not even just tasks, but whole domains.

    [Dillon] (28:33 - 28:57)

    Yeah. Like I'll give you an example. Accounting.

    People go to school for accounting. They get accounting degrees. I don't think you're going to need that in five or 10 years.

    You might want to understand the principles and the concepts, but within five years nobody will have to update QuickBooks anymore or run complicated formulas in spreadsheets.

    [Jeff] (28:58 - 29:01)

    I have one. Rob, if you're still searching, I'm going to...

    [Dillon] (29:01 - 29:02)

    Go for it. Phone a friend.

    [Jeff] (29:02 - 29:42)

    Phone a friend. One is just redlining agreements. I think before, you'd have to go to law school and do all these things. I've seen some amazing things there where that can be automated.

    We can say, hey, review this, identify if there's any sort of risk, act like you're somebody else that was trying to sue me, whatever. Those types of prompts help you to be able to stay locked tight. With a lot of those things, is it good enough?

    Is it protecting me at least from the big things? Yes, I could maybe adjust a comma here or whatever, but with those redlining agreements, or revising some of these, even generating some of those additional elements to whatever contract you have, I think that's something that you're not going to have to learn. You're just going to have to learn what parts to engage with a tool on.
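    To make that concrete, here is a hedged sketch of the adversarial review prompt Jeff is gesturing at; the clauses and wording are invented, and anything a model returns would still need review by an actual lawyer.

    ```python
    # Hedged sketch of an adversarial contract-review prompt; the clauses
    # and wording are invented, and output still needs a real lawyer's eyes.
    contract = """\
    7.1 Vendor's total liability shall be unlimited for any claim.
    9.4 Either party may terminate at any time without notice.
    """  # stand-in for a real agreement

    prompt = (
        "Act as opposing counsel preparing to sue me over the agreement below.\n"
        "1. Quote every clause you would attack, verbatim.\n"
        "2. Rate each risk high/medium/low and explain my exposure.\n"
        "3. Propose replacement language for each high-risk clause.\n\n"
        + contract
    )
    print(prompt)  # send to the model of your choice, then verify with counsel
    ```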

    [Dillon] (29:42 - 29:47)

    Hmm. That's an expensive degree. I know lawyers hate to hear that.

    [Rob] (29:48 - 30:00)

    Actually, my buddy who just recently became an attorney, he's stoked to hear it because he feels like people are still going to pay him the same rates. Yeah, he'll still use it, which I still think is going to be a real thing too.

    [Dillon] (30:01 - 30:09)

    Yeah, the license is important because of the liability, but to eliminate so much of the processing time because you can only read so fast.

    [Rob] (30:09 - 30:55)

    Well, yeah, and I'm going even further, deeper into, like, grade school. If you think of history, English, math, science, it's interesting, because I think the thing that underpins all of those is not the topics themselves, it's the reasoning skills behind them. That's where I get a bit concerned, because I've felt myself becoming more intellectually lazy at times, or needing just a kickstart to get through what might be writer's block, for example.

    And now I'm asking myself, geez, in 20 years, for example, we're all talking about our writing styles here. Will people even have writing styles in the future?

    [JP] (30:56 - 30:58)

    Yeah, yeah. They will and they'll be more valuable.

    [Dillon] (30:58 - 31:06)

    They'll be more valuable. Here's my question, though: you said that negatively. Why?

    As though you need to suffer.

    [JP] (31:06 - 31:10)

    It's only suffering if you don't like it. Some people enjoy writing.

    [Dillon] (31:11 - 31:12)

    That's cool too.

    [Rob] (31:12 - 31:28)

    Yeah. Negatively as in a world where we don't have our own unique writing styles. I do think that's concerning.

    I think a writing style is kind of like, you know, a derivative of a personality. I don't want us to all have the same personality.

    [Jeff] (31:28 - 31:52)

    And to go back to what you said earlier, Rob, you were asking, do you lose out on the reasoning skills? And I think in some ways we've run a partial experiment over the last 15 years with health scoring. Essentially, the idea behind health scoring was, we'll do all the thinking.

    You just be the infantry soldier, or the factory line worker, that just moves the lever.

    [Dillon] (31:52 - 31:54)

    I don't think that's what health score is for.

    [Jeff] (31:54 - 31:56)

    I know, but that's how everyone's treating it.

    [Dillon] (31:56 - 31:57)

    Yeah, yeah.

    [Jeff] (31:57 - 32:19)

    And so what happens is that people go, well, I don't have to predict or I don't have to do things, because the health score is what it is. Now, who am I to say they're not going to churn, or they are going to be fine, or whatever? It's somebody else's problem if they can figure out the scoring.

    And so I'm not saying that's a perfect one-to-one, but I think it's emblematic: once you abdicate the responsibility, if you haven't learned the skills, you can't judge.

    [Dillon] (32:20 - 32:28)

    Here's another example. I'm going to talk about reasoning skills. I think we're all old enough.

    Jeff, I don't actually know how old you are. You look like you're...

    [Jeff] (32:28 - 32:30)

    I'm a mysterious age. I'm anywhere between 21 and 38.

    [Dillon] (32:30 - 32:40)

    So yeah. I think we all learned in grade school, how to use a card catalog. You remember that?

    The card catalog in the library?

    [Jeff] (32:40 - 32:41)

    Oh yeah.

    [Dillon] (32:41 - 33:00)

    Just the huge bureau with all the little drawers and the code, and you had to figure out what the code meant. And then you had to go in and find the book, and then you had to figure out where that was in the library. That was a skill, right?

    But you don't need that anymore. You don't need that at all. How about another one?

    Horse riding.

    [JP] (33:01 - 33:03)

    He's going to keep going until he's right. Until you agree.

    [Dillon] (33:04 - 33:36)

    My point is just probably mostly about the way in which we view these things through the lens of a certain time period, right? Horse riding now is luxurious. It's a social signal if you know how to ride a horse now.

    But back in the day, you just needed that to get around. And then we got rid of it, and nobody was like, oh, well, we're going to lose horse riding to the technology. Cars are going to kill horses.

    [Rob] (33:38 - 33:48)

    So where I think this is different, though, is that those things, for most people, are not fundamental to their identity, and they weren't back then either.

    [Dillon] (33:48 - 33:54)

    There's a lot of people I know who don't think thinking is fundamental to their identity. You ever been to Walmart?

    [Rob] (33:57 - 34:03)

    That's where I took Lana on our first date. Walmart, Columbus Ave, Philadelphia.

    [Dillon] (34:04 - 34:07)

    Well, yeah, a couple of Ivy leaguers and you're like, let me show you the other side, baby.

    [Rob] (34:09 - 34:47)

    It was definitely a bizarre experience for her. But no, we all have a fundamental sense of identity. And I think for me at least, and again, maybe this is just a me thing, my reasoning is so tied to who I am.

    That's the thing that worries me. It would be like what I imagine it was for my friend who was a professional athlete and then had an illness and couldn't play sports. He was actually bedridden for like four years.

    The level that that rocked him at his core is what it would be for me if I lost my ability to reason through the world.

    [Dillon] (34:47 - 34:50)

    Yeah. But do you, can I share an example?

    [Rob] (34:50 - 34:51)

    Can I share an example of this?

    [Dillon] (34:51 - 35:03)

    I want an example of where reasoning, where you went home at night, so to speak, and you were like, man, I reasoned the shit out of that thing. I want to hear an example.

    And I know it exists. Like I'm, you know, go ahead.

    [Jeff] (35:04 - 37:31)

    This isn't going to be exactly the example you asked for, but I think it illustrates some of the point. So, I'm not the biggest fan of college. I graduated from college.

    I feel like most of the things I was there for were useless, but there's one professor that made a massive impact on me. Okay. And he was teaching business law.

    He was a First Amendment lawyer. And when I went to college, everything was just, like, rubric: do these things. And then you imagine you get an A, and then hopefully you can bully your professor into giving you extra credit for something else or whatever it is.

    Everyone was just gaming the system. And this professor came in and said, okay, your assignment is: answer this question. And people were like, oh, how many pages would you like, professor?

    And he's like, I said, answer the question. And they said, oh, is it six or 10 pages? Answer the question.

    They said, do you want us to do research? Do you want us to do interviews? Answer the question.

    And everybody's losing their minds, like, what are you talking about? And people were freaking out.

    And then people even dropped the class. This was one of our first assignments. Because they were like, what are you talking about?

    Everybody just tells us, do this, do that, and then you get an A. Do this, do that, and you miss something, you get a B. And it literally shattered people's universes.

    And then I go in there and I, and I wrote the best paper I've ever written in my college career because I said, oh, the purpose of this is to answer the question. And so I did an interview. I never did interviews at all for any of my papers, but I found somebody, an industry expert.

    I did an interview. I pulled some data. I did all these things because I actually realized what I was doing was answering a question.

    And so everybody had been so conditioned. And I got an A in his class because for once I was actually living and reasoning, rather than every other class, where it was, oh, it just has to be 10 slides: first slide has to be this, put in a picture, put in the three bullet points, and I get an A. And everybody loved that.

    But I actually loved this professor, because I was actually doing something for the first time. And so to me, that changed my whole perspective on life, because now I go in and I am using my reasoning skills. I ended up always asking, what's the question I'm answering?

    And now I also don't have to ask for permission. Before, it was like, oh, I have to do an interview, I have to pull data. No, I get to pull data.

    I get to do an interview. I can do anything I want as long as I achieve the outcome. And so to me, that's something that's super valuable, and it came from owning the outcome and gaining those reasoning skills that can be applied in any sort of fashion.

    [Dillon] (37:32 - 37:35)

    JP, you've been gesticulating silently.

    [JP] (37:36 - 37:41)

    I know. I've just been watching you all play. It's been great. I know.

    [Dillon] (37:42 - 37:42)

    I might be Italian.

    [JP] (37:43 - 38:02)

    I don't know. I don't know. Yeah, yeah, yeah.

    But one of my favorite movies, in fact, is all about how a lot of us just voluntarily chose to put ourselves in machines and not worry about anything. It's called The Matrix, one of my favorite movies.

    [Dillon] (38:02 - 38:04)

    Wait, they voluntarily did? I thought they were born into those machines.

    [JP] (38:04 - 41:00)

    Watch how it plays out. But Agent Smith talks about people accepting a program.

    You know, the reality of the world is fairly grim. The reality of the world is a result of the way we've done things as humans, right? Like the destruction we see, everything that's happened.

    So it's this weird thing where there's sort of an inevitability to something that we, in fact, created. And I think there are a lot of people that see that: oh, well, now this is an escape, maybe, from all these things that we created. We might as well go towards this because of the idea of inevitability.

    But of course, we have our hero, the One, or if you rearrange the letters, it spells Neo. Neo comes in because Neo is going to do what? Free people.

    And there's still people that may not necessarily want to be freed because they would rather believe that they are eating a steak than to actually eat slop. So you have people who really, sort of in this last city on earth, really value what it means to be human. And there are some machines down there that are actually keeping them alive.

    And there's a great conversation in the second one, The Matrix Reloaded. They're having a conversation about these machines and how they support life.

    And Dillon, you're probably reminding me of the conversation between Neo and Councillor Hamann, because they're talking about it. Neo's like, well, yeah, you could just shut these machines down. That's different. And he's like, oh, right. Yeah, that's it. You solved the problem.

    He's like, oh, right. Yeah, that's it. You solved the problem.

    Like it's that easy. But I think the reality is that we are in a very complex, I'm not going to say nuanced, but a very complex situation. And so I think for a lot of folks, again, back to one of the first things that I mentioned, the humanistic resistance to AI has to do with, what does it mean to be human?

    And if somebody still really enjoys horseback riding, then I can tell you right now that there are people out there riding horses and not hopping in the fastest Bugatti that they can. So people are going to find their meaning in life. And that's sort of like up to them.

    But then there's other folks where if they never wanted to write a book, but they want to get a book out and get something, then maybe they use the AI. So I think that that's like the spectrum of things. It's like not losing our humanity in all of the advancement that we're witnessing, because it's at an unprecedented level, right?

    It's like we've never seen before, you know?

    [Rob] (41:00 - 41:11)

    I need to plug for anyone listening to this. This is going to be a deep cut philosophy reference. But if anybody reads any Jean Baudrillard...

    [JP] (41:11 - 41:13)

    I knew you were going to go Baudrillard. I knew you were going to do it.

    [Rob] (41:14 - 41:32)

    The Matrix is based on this guy's writing, this philosopher, Jean Baudrillard. And he talks about simulations. The world is full of simulations, which is interesting.

    Because a health score is a simulation of customer health. And it's interesting, because Dillon's like, I'm done with you, man.

    [Dillon] (41:33 - 41:40)

    Well, literally in our group chat earlier today, I was like, just unplug me, because... Oh, you did say that.

    [JP] (41:41 - 41:45)

    That's what had it on my mind when you said that.

    [Dillon] (41:46 - 42:13)

    I think everything is a simulation. Literally what you see through your eyes is an actual simulation of what the world really is. So that's where I get a little bit frustrated with...

    We say it as though it's a revelation, but the definition of simulation is just so highly interpretable that like, yeah, sure. If we want to put the threshold way out here about what reality is, we could do that. And then I guess we're all philosophers.

    [Rob] (42:13 - 43:59)

    Maybe I should write a book. I've got something good, because it's going to tie in, and I want to make sure Jeff closes us out.

    Dillon, you asked us before, can you give an example of a time when you laid your head on the pillow and you were like, wow, I reasoned really hard about something today? And to me, when you're doing that deep reasoning, a lot of times you're breaking through the layers of simulation. I'm going to give you an example.

    I was talking to a guy, a manager, team manager, and he was like, I'm working such long hours. I need you to help me fix that. Okay.

    Let's unpack that. Let's go one layer deeper. Turns out he's having a hard time delegating.

    Okay. Let's go one layer deeper. Why are you having a hard time delegating?

    He said, well, I don't want to put things on people's plates. Okay. Why don't you want to put things on people's plates?

    Well, you know, I feel like it's just going to make their work lives worse, and that having more on their plates is net worse for them and their work lives. So I was like, okay, let me just inject a little bit of a reframe here.

    What if some work could be reframed as opportunities? If we inject this into your mental model, then by not putting work on people's plates, you're actually depriving them of opportunities to grow in their careers. And I'm talking about select types of work, things like building health scores, building CSPs, process advancement work, the cool stuff, the fun stuff that we'd love to get to.

    I was like, what if by not putting that stuff on their plates, you're actually depriving them of opportunities to grow, which would have a knock-on effect of eventually freeing you up so you work fewer hours? And he was like, that was really cool. I've never thought of things that way.

    So when you break through that simulation, the surface level, there could be some juice there. Jeff, you want to close this out?

    [Jeff] (44:00 - 46:18)

    Yeah. I love that example there, Rob, because that stepping through of the reasoning happens because you're taking responsibility for the outcome you're trying to help that person with. And I think this is where I would kind of wrap it all up.

    The AI revolution, everyone says it's here, it's already passed, it's already there, whatever. It's not here yet. Objectively, it's not, because if everybody looks at their day-to-day work, you're still doing a lot of manual tasks, you're still sending emails, you're still doing all these things in customer success.

    It's not doing your whole job for you. And so if that's the case, put on your boots, tighten them up and start doing the job that you can do today. And continue to have your toe in the water by AI, look for opportunities to integrate it wherever possible, be more productive, get these things going.

    But my plea for everybody is this: the first thing that's going to change everything for you is taking responsibility for the outcome. If you do that, then it's going to change your day-to-day work, it's going to change how you view AI, it's going to change how you prompt, it's going to change all the other things that you're doing. You're going to find new data that you'll have to have to do this more effectively.

    And I'll give you one last example, which is the weirdest one we're going to have today. There's a show that I love called The Food That Built America; it's on History Channel, and it's way weird, but it came to my mind, so I'm going to wrap up with this, okay. If you watch it, it's a great show; that was the tech revolution of its day. I believe that all the food innovation that happened at the turn of the 20th century, in the early 1900s, happened not because people were amazing technology wizards, like Milton Hershey was some Tesla over here making chocolate out of his fingers.

    The answer was this. Each one of these tycoons had one specific thing they understood. The guy who came up with Kraft cheese sat in his apartment for three years reheating cheese to figure out a shelf-stable cheese.

    He wasn't a food scientist, he figured it out. Milton Hershey went and tasted milk chocolate overseas and said, this was amazing. I wonder if you could do this with real milk instead of dried milk solids.

    He figured out a way to do it. If you have the aim in mind, you will figure it out, and you will leverage technology, you will leverage tools, you will leverage your own reasoning skills to make that happen. I figured out a way to bring in The Food That Built America, but anyway, that's my final example, to help people focus.

    If you have an aim, you'll always figure out the how and the what, if you have the why.

    [Dillon] (46:18 - 46:21)

    Inspiring. Inspiring.

    [Jeff] (46:21 - 46:21)

    Check out the show.

    [Dillon] (46:22 - 46:28)

    Milton Hershey, by the way. Hershey's chocolate, objectively terrible chocolate. I say that as somebody who grew up...

    [Jeff] (46:29 - 46:31)

    Billions of dollars, billions of dollars.

    [Dillon] (46:31 - 46:53)

    The thing was, how can I make it as cheap as possible and pack my pockets? Dirtbag. No, Milton Hershey's great.

    Philanthropist. The school system in Hershey, Pennsylvania has never once used a public dollar because he has funded it out of a trust for a hundred plus years.

    [JP] (46:53 - 46:53)

    Seems like a sanctuary.

    [Dillon] (46:55 - 47:01)

    Whoa. They're all very entitled children in Hershey, Pennsylvania.

    [JP] (47:01 - 47:03)

    You bring us right back.

    [Dillon] (47:06 - 47:16)

    Jeff, I love this. I love the positivity at the end. Send me the link to the episode about hot dogs.

    I want to learn everything I can. Is there one on that one?

    [Jeff] (47:16 - 47:18)

    There's a good one on that one. It's a great show.

    [JP] (47:18 - 47:23)

    Like if we eat a hot dog, are we now a hot dog, right? To get philosophical on it.

    [Rob] (47:23 - 47:30)

    No, we're going to start talking about the Ship of Theseus. Did Baudrillard talk about that? Don't get me started on the Ship of Theseus.

    [Dillon] (47:32 - 47:38)

    Okay. Jeff, that is our time. Come back soon.

    Let's debate something else. Thanks, Jeff. Let's do it.

    Thanks, everybody.

    [VO] (47:47 - 48:20)

    Please note that the views expressed in these conversations are attributed only to those individuals on this recording and do not necessarily reflect the views and opinions of their respective employers. For all general inquiries, please reach out via email to hello at lifetimevaluemedia.com. To learn more about advertising on The Daily Standup and the Lifetime Value Media Network, please reach out via email to advertising at lifetimevaluemedia.com.

    Find us on YouTube at Lifetime Value, and find us on the socials at Lifetime Value Media. Until next time.

  • Do you have a story to tell, an opinion to share?

    Join us on The Daily Standup.
