Transcript
This has been generated by AI and optimized by a human.
Jess Carter (00:04):
The power of data is undeniable, and unharnessed, it's nothing but chaos.
(00:09):
The amount of data was crazy.
(00:11):
Can I trust it?
(00:12):
You will waste money.
(00:14):
Held together with duct tape.
(00:15):
Doomed to failure.
Jess Carter (00:16):
This season, we're solving problems in real time to reveal the art of the possible, making data your ally, using it to lead with confidence and clarity, helping communities and people thrive. This is Data-Driven Leadership, a show by Resultant.
Hey everyone. Welcome back to Data-Driven Leadership. We've talked about AI a lot on this podcast recently, but today's conversation is one I'm really excited about. I know I say that a lot. This time, I really do mean it. Joining me is someone I've had a chance to work closely with here at Resultant for the past two and a half years, Sindhu Venkata.
What makes this conversation different is that I've seen Sindhu's expertise with AI up close, both internally and externally, but just as importantly, I've seen her humility. AI is evolving so quickly and she approaches it with a rare mix of deep knowledge, curiosity, and honesty about what we know and what we don't.
(01:10):
That's why I trust her perspective and why this is such an interesting conversation to listen in on. Today, we're focusing on what it really means to lead in an AI-driven world when the technology is moving faster than strategy, tools are everywhere, and leaders are being asked to create real value before the rules are fully written. Sindhu brings a grounded, real-world lens on how to think about AI as it becomes core infrastructure, not just another initiative. This is one of those conversations that challenges assumptions, sparks new thinking, and leaves you with practical takeaways you can really use. So let's jump in.
Welcome back to Data-Driven Leadership. I'm your host, Jess Carter. Today we have Sindhu Venkata, vice president of technical delivery here at Resultant. Let's get into it. Sindhu, welcome.
Sindhu Venkata (01:57):
Thank you, Jess. Thank you for having me here.
Jess Carter (01:57):
Does it feel like wildly overdue for us to chat on the podcast?
Sindhu Venkata (02:02):
I mean, we chat every day, so it feels like we are on a podcast on the regular.
Jess Carter (02:09):
We have our own.
Sindhu Venkata (02:10):
That’s right. We have our own.
Jess Carter (02:11):
That's right. I am so excited to have you chatting through some of these topics today, because you're such an expert. You help so many clients. You help our own firm. And I think you have this really interesting depth of what's going on academically, in research, and in the newest technologies, but also in how we hone it with clients and with ourselves. So there's just this pragmatism to you that I really appreciate, and your humor also makes it fun. So no pressure, but you have to actually demonstrate how funny you are on this podcast.
Sindhu Venkata (02:39):
Okay, no pressure.
Jess Carter (02:42):
Alright, so first of all, we've worked together a lot in the last two years. Is that right?
Sindhu Venkata
Two, two and a half years.
Jess Carter
Yeah, yeah. Okay. But for people who don't know you, let's talk about what drew you into technology and how leadership became a natural step for you.
Sindhu Venkata (02:57):
What drew me into technology, very honestly: if you're growing up in India, when I was growing up, you don't have an option. You're like, hey, do I become an engineer or a doctor? So I would say that was one of the reasons I got into technology. But what kept me in the field was the very logical way you solve problems, whether it is the very famous hello world problem or, hey, I'm building this network circuit for X, Y, and Z reasons. It was the logical way we think about things to solve problems, and the logical way even technology is built to solve these problems. Which is changing rapidly, which is our topic today, which is AI. Sometimes you're like, what is the logic behind it?
So I feel like that is what kept me in the realm. And when it comes to leadership, I think when you are surrounded by very smart people who want to do the right things, it just naturally comes to you. I'm surrounded by amazing leaders within the team, leaders at Resultant, and peers like you. When everyone wants to solve the same problem and do their best, leadership comes naturally to everyone.
Jess Carter (04:17):
I don't know that everyone would say that, so I appreciate you saying it, but I do think I've gotten to see some of your leadership style and approach firsthand, and I really appreciate the logic behind how you show up. One of my favorite things about your style is that it's built on competency. You lead things that you've done really well, so you can coach and mentor and actually lead. You have the competencies to understand the details, and why and when they matter. There are lots of different ways to lead, but I think it's really helpful for the people following if they believe that Sindhu has been in their chair before, doing the things they're doing, and can help coach them through it.
Sindhu Venkata
Well, thank you for that.
Jess Carter
Yeah. Well, so then let's talk about AI. So first, how does AI impact you and your life and your role right now?
Sindhu Venkata (05:01):
Right now, it's in the forefront. It's on top of everyone's mind. I kept hearing people say AI is going to become like electricity: you don't wake up every day and go, hey, yes, I have electricity, check, I have water, check. It's just a utility, and AI will become that. I was very skeptical. I was like, that's not going to happen.
(05:20):
Because as a very logical person, you're like, I want to know how you're thinking. I want to know how this is operating before I can leverage it.
It started very small. It really started with, I think it was a show or something like that. And because English is not my native language growing up, I'm like, what do they really mean? And sometimes when you Google it, it gives you very standard definitions. So I actually opened up ChatGPT and said, this is what this person in the show said, can you translate it for me? And it does. It's like, this is what it could mean, and here's the funny context, all of it. And I'm like, okay, yeah, great. Then it also came to, and I'm kind of opening the kimono here a little bit, the way I send emails. As you know, I'm very straightforward, and I'm like, we should do this and we shouldn't do that. And sometimes that tone doesn't really sit well with everyone.
Jess Carter (06:15):
Sure.
Sindhu Venkata (06:15):
So I would go to ChatGPT and be like, can you please soften the tone? And it would beautifully soften the tone with no complexity and describe what I'm trying to say very eloquently, and I'm like, fantastic. So that's how I started using it.
Now I have multiple threads running simultaneously. One that is work related, one that's my kids' school related, one that's just personal life and things that are consistently running. And only because I know that if I spend time crafting a message to my daughter's teacher or asking the basketball coach, hey, can we move the schedules? Thinking about how do I draft it, how do I craft it? How does this look? It's just going to take up my time. So now I have multiple threads or agents or GPTs running that really understand how I think and how I speak and just do some of these very manual tasks and take it off my plate.
I've also incidentally taught my mom to use ChatGPT, which was a very dangerous thing to do, because now she'll take a picture and be like, can you tell me how to declutter the space? And then she'll be like, this is what ChatGPT says. It says that you have this. I'm just like, mistake.
Jess Carter (07:34):
Yeah. Maybe we should not teach moms how to use ChatGPT. Yes. At least when it comes to visiting our homes. If you're coming to my home soon, you're not allowed to use ChatGPT to evaluate whether I have clutter.
Sindhu Venkata (07:46):
Everything. She's like, oh, you bought these oranges, I think they'll be cheaper here, ChatGPT says. I'm like, no, we're not doing this.
Jess Carter (07:53):
That's amazing. So there are pros and cons is what you're saying.
Sindhu Venkata (07:57):
There are pros and cons. Yes.
Jess Carter (07:59):
Yes. Well, okay. This is interesting. I've had a couple of instances where I see an email come through and somebody forgets to copy the right part of it, so you'll see, would you like me to write it differently for you? Would you like it to be more concise or softened even further? And in your head, is that a ding on them? I'm kind of here for it. You know what? They're trying it out. They're figuring it out. I don't feel embarrassed for them. But how does that strike you?
Sindhu Venkata (08:29):
To me, that is what we are defining as work slop. And if you actually look at it, people are under a lot of pressure because there's this whole speed trap happening. Everyone is telling us, hey, we need to use AI, we need to leverage AI, because they will eat our lunch. But no one is talking about how they are going to eat our lunch. We know that when you look at ChatGPT, the adoption was phenomenal. It was 800 million users in 17 weeks. It took the internet 23 years to get there.
Jess Carter (08:59):
Right.
Sindhu Venkata (09:00):
There's definitely true technology adoption happening. All the big tech companies are spending north of 200 billion just on the infrastructure. They know this is coming. But at the same time, as they're investing and as the technology is moving fast, almost every company that has leveraged it is saying, we are not seeing those productivity gains, we are not seeing the ROI. And I believe the reason is this whole speed trap of, we need to use AI, but we are not giving people the tools to use it intentionally. Which is why you see these emails, and sometimes it'll even say insert name, and someone just copies and pastes it, and I'm like, insert whose name? And I've made those mistakes too. When you're in a hurry, you're like, okay, I just need to get this out, and I don't read it, and then I'm like, I actually didn't mean to say that. I meant to say something else. So the speed trap is one thing. People are being told, we need to use AI, but they're not being given the intentionality to use it.
(10:04):
When you actually look at where AI actually helps people, you're seeing people use it in different ways. This is a perfect example: I'm just taking something and pushing it. But there are more dangerous ways this can impact us. Okay, let's think about what I'm doing, and I'm hoping our Chief Marketing Officer, Chelsea, doesn't hear this, because she'll kill me. First is task expansion. In my role, I have certain responsibilities and certain tasks I need to do. Now that I have this powerful agent, I'm expanding my tasks to doing more than that.
(10:37):
I'm like, okay, I put my thoughts down on what this offering or this technology is supposed to do for a client, but, hey, Claude or ChatGPT, make this a marketing-friendly slide deck. So there's immediate task expansion. The second thing that happens is blurred boundaries. You are no longer saying, okay, I'm spending an hour to put this deck together. You're going to go to lunch, or you're going to be driving, and you're going to be talking to your agent, like, oh, add this or add that. So the boundaries of your work confines are also really blurring. Then the third one is you're constantly multitasking. If I'm creating that deck, and again, I'm sure I'm going to hear from Chelsea right after this podcast about, hey, create this pitch deck, here are the Resultant brand guidelines and everything, I'm also like, okay, it's taking time to do this.
(11:29):
Let me do something else too. What that creates is fatigue. It creates fatigue because we are doing other tasks that are not in the scope of our responsibility, which is good, we are also blurring these boundaries, and we are constantly working with our agents and multitasking. And with that fatigue come all those silly mistakes. I'm doing this, I'm doing it efficiently, but am I actually using, I don't want to say the human in the loop, but am I actually using my true cognitive skills and judgment to go and look at all of this? No, I'm not. Because I'm doing things faster, I'm doing things constantly, I'm doing more things, and I'm feeling empowered and efficient, and that's why I'm also seeing the sloppiness.
When we add intention to all of those, hey, great, you can do it more productively and more efficiently, some of that sloppiness would stop. But that intention will mean different things to different teams and different organizations, and it has to come from the top, versus “Let's invest in AI.” What does an investment mean? How do you want to use it? All of that intentionality has to be set so that we don't create that work slop.
Jess Carter (12:37):
I'm glad you introduced the concept of slop. I don't think we've talked about it on the podcast yet. It's why prompt engineering is important: the more thoughtfulness you put into what you're asking for, the better the outcome. We all know that, but you better believe several times a week I still give it a crappy prompt because I’m rushing, right? And I think what bothers me from an adoption perspective is you hit a wall because it's not good enough, and then you have to iterate or ask it to start over, and it starts to feel like, I'll just do it myself. And it's because I was lazy; I didn't give it the right prompt at the beginning. So I'm curious what this does to work expectations.
Does the quality of output look more drafty in nature, and is that maybe even aligned to some OCM, organizational change management, where things don't have to be perfect to start, to talk about shipping it and getting feedback? So I'm not villainizing work slop, and I don't think you are either, but I think it's a natural part of this conversation, and the goal is you have to right-size it. You can't just be sloppy, right?
Sindhu Venkata (13:35):
The analogy I'd like to use is in basketball: if a player gets faster, you're going to have to change the strategy on how he's going to execute. You're not just going to say, run up and down the court. Right? Then he's also going to hit that wall, because he's only going to keep running. He's not going to look up, because he's now used to it; he's now become complacent.
(13:56):
When you have a faster player, what do you do? You redefine the play. You redefine this position, you redefine how the entire team works, and that's exactly what we should be thinking about. If you are getting more productive and efficient, you're thinking only quantity, but you're not thinking quality. How do you bring that into play?
I believe work slop is not technology failure. I think you and I are saying the same thing. It's not because the agent or the AI or the tool we are using is not doing it properly; it’s because we are not prompting it properly. It's technically a management failure, because we are not giving the guardrails, we are not giving people that intentionality to say, hey, use this with intention. The example I always try to use is: assume you have interns, right?
(14:42):
And I'll give you the story of when I had my first set of interns. I said, hey, I would like an Excel sheet with all of these rows and columns, and this is what I want to present. And he says, I got it. He went and provided an Excel sheet with the rows and columns I asked for, but it had no data. Because literally, that's all I asked for, right? So what do you do? You don't just tell the intern, wait a second, why did you do this? You're like, oh, I need to coach him better. I need to give more data, more structure, more feedback. I need to coach the intern better and give more direction. That's exactly how I think about AI, too. The next time I tell him, give me a sheet with these rows and columns, he's not going to come back with a blank sheet, because he's already understood me the first time. He's like, oh, she expects this data. She expects this thing. This is how I modified it. That is how I look at AI, too. The work slop is not because there's a technology failure; it's only because we are running too fast. We are basically prioritizing quantity over quality, and we are not giving the right instructions. So I tell our teams, when you think about AI, think of it as having 10 interns at $30 a month, right?
(16:00):
That’s an enterprise license. Just leverage those interns, but be intentional in the direction you're giving them. The other shift that I believe will happen, and at Resultant especially, because we are a smaller firm, we can add that intentionality and coach and guide people, is that you're going to use a lot of judgment versus your experience.
So if I ask ChatGPT to produce the pitch deck that Chelsea's going to throw in the trash, I'm just going to look at it and go, okay, I'm using my judgment: that makes sense, that makes sense. But if I had to create that pitch from scratch, I would be using my thought process. It would be iterative. I'm putting something on paper, I'm like, no, that doesn't make sense, scratching it out, and I'm using my experience and my cognitive skills together to create that output. But when I'm just asking AI to create it, I'm just making a judgment call: yeah, that makes sense, and that doesn't. I'm not leveraging my experience that much. So for people at our level, we can take a step back and ask, did I create my story arc before I asked this AI intern of mine to do it? So that we are bringing that experience and that cognitive ability together. However, we can fall into the trap where we are asking junior and mid-level resources to use AI and they're not gaining the experience.
(17:26):
So in a few years, what are you going to have? You're going to have managers who are managing work that they never had the experience of building.
Jess Carter:
That's right.
Sindhu Venkata:
They've only made judgment calls. So it really goes back, again, to how do we set that intentionality? How do we coach our people to use it meaningfully, so that as they rise up the ranks in this new technology landscape, we are also giving them the opportunity to gain experience, and not just asking them to make judgment calls along the way?
Jess Carter (17:57):
You mentioned the enterprise licenses. You and I know, but I don't know that everyone knows, that there are three companies that have the infrastructure to do everything that's happening. Then there are these mid-level companies supporting it, and then there are literally thousands of companies who have added their own AI feature to their tools and software. So what I imagine, if you don't work at an agnostic tech consulting firm, is that people are drowning in product licenses and things they're being asked to buy: hey, did you know that our sales tool, our CRM, has an AI agent built in? Did you know that your ERP has an AI agent built in? Did you know that you can build it in your dashboarding tool, that Tableau has AI built in, or Power BI? And so they're trying to figure out, where do I look right now?
(18:44):
Because if I look all around me, everyone wants me to buy their tool and swears it'll be helpful. Now, you and I also understand that a lot of the world's adoption is starting with the most accessible items, so this is like ChatGPT, Claude, Gemini, broad useful LLMs that you can access. So one of my questions for you is, how would you coach a friend of ours if we were out to dinner with a girlfriend and she was a chief at a mid-size firm and she's got all of these products lining up for her to just try their AI tool, but she's also got decent data in-house and she's got a warehouse and some visualizations. Do you double down on the licenses? Do you double down on internal capabilities? What would you tell our friend?
Sindhu Venkata (19:35):
She can go either path. I think it ultimately boils down to, what is your company's strategy? What does your organization do? And then take a step back and ask, if I were to invest in something, whether it's AI or tooling or whatever, would I want to invest in improving processes, reducing costs, or actually transforming my organization? Right? And then, depending upon what she decides, if she says, hey, I run, say, a small company doing something like food supply, and I want to basically increase efficiency in order taking and all of that,
Jess Carter (20:14):
Right?
Sindhu Venkata (20:15):
You're not doing something transformational for your business itself because your business is going to remain the same,
(20:21):
But you want to increase efficiency, and you're like, I already use this tool set. Then the recommendation would be: you already have these tool sets and you're not transforming your business as a whole, so use what's out there. Your existing licensing, whatever's out of the box, is sufficient for you to gain that efficiency. Again, you'll have to do a lot more. But if you have an organization doing something completely transformational, and I'm failing on examples here, but let's say a call center, then you are going to be completely transforming the way you work, not just increasing efficiency. It's not about the number of calls Sindhu or Jess can take today. Because your business model is serving the customers, if you are going to transform that, then that would be more of cannibalizing the way you do business, and here is how you actually build AI, or build it ground up.
(21:13):
One of the things where I have seen a difference in how companies approach AI, whether they're pursuing transformation or just improving operational efficiency, is that they constantly think, oh, if I use this tool and add it to the existing process I had, it's enough and I'll be transformational. But that's not true. The question is not, oh, how can I use AI in this process? And honestly, when we think like that, whether we are creating content, whether it's marketing, whether it's service delivery, you think, okay, how can I use AI to generate this script, or how can I use AI to generate the podcast content, or any of those things. But companies who want to truly transform ask: if nothing existed today, if I am the company and all my processes and tools didn't exist as they are, and I know that the technology of today exists, how would I actually do it? That is the true transformational way of thinking about how this technology will actually transform or cannibalize my business. Not a lot of organizations are doing that.
Jess Carter (22:16):
I have not heard anyone else say that as eloquently as you just said it. Tools and licenses are not bad. They're not the only way and they're probably not the way to transform, but not everything needs to transform.
Sindhu Venkata (22:31):
Right.
Jess Carter (22:31):
That's extremely well said. There are so many people that needed to hear that. Then there are going to be pieces of your organization that may need transformation, and pieces that may just need licensing for a tool or two. That makes a plethora of sense to me. Okay, there are conversations happening about personal and ethical considerations around AI. We're hearing people worried about the environment; we're hearing people worried about data centers. Our headquarters is in Indiana, and there's a whole bunch of legislation right now about data centers in Indiana, northeast or west, I think, so I need to go read it. One of the things I am curious about is that AI, to your perfect example, is this really try-hard intern, and it wants so badly to please that when Sindhu's like, I need this thing, it gives you something. What people are noticing is that what is true isn't always its priority.
(23:27):
It will give you something, as opposed to giving you nothing, more times than not. And so one of my questions for you is: we have people getting accustomed to AI who maybe didn't work in engineering and data science the way you and I did, where this is a natural progression and we understand a little bit more about how these things work. How would you coach kiddos? How would you coach friends that are getting into it to think about ethical use? And the environment is a totally different example; we both care about the environment and we think AI is important, so how do you balance that? And/or what if someone goes to it wanting to know something that's true, and they don't understand how to look for a hallucination, or how to understand the risk of that?
Sindhu Venkata (24:09):
That is what honestly keeps me up at night. Things like slop and judgment are things that we know we can watch and maintain, right? You can look at the quality of the output and actually evaluate and measure it. But what I call this is trust erosion. Yes. Sometimes I'll go and ask ChatGPT something and it'll say all this, and I'm like, but wait a second, Microsoft didn't even exist in this year. And it's like, oh, my bad, yes. And the difference between an intern and this AI, which we are also equating to an intern, is that you can ask the intern, what was your thought process when you did that? Right? When I said, go and bring rows and columns, what was your thought process when you put this data together? But you just cannot do that with AI.
(24:59):
The only thing I believe, and again, there are a lot of studies out there, and even the technology, the grounding techniques for preventing hallucination, all that is changing, but the one thing I tell people is: don't ride shotgun. Be the driver. Because if you're a passenger, you're like, oh yeah, I think the plane is going in the right direction, the car is going in the right direction. But if you are in the driver's seat, then you are dictating where you're going, which streets you're taking, which roads, which direction you are following, and that gives you more of that trust. Yes, you have to be skeptical, but if you are in the driver's seat, then you know where you're steering your vehicle: this is what I'm doing. You're not blindly just sitting in the car being like, okay, I'm going to go from destination A to destination B, and then, wait a second, it was not the right thing, the destination is wrong, we went all over the country to actually get to the next block.
(26:02):
The other thing I would say is that when you are in the pilot seat, use your best judgment, use your evaluation, and if something feels off, go research it. Don't take it for what it says. We can't tell AI, be honest. It's not going to, because it's like, I am being honest, what are you saying, Sindhu? I'm going to shut you off from my instance immediately. So that's how I guide people, and I actually do this myself. I even tell my kids, if you ever use AI, drive it. Don't just tell it, go do this research. Drive it: say, this is what I want, here is what I heard, here are papers I'm reading, ground it. And the other thing is, if the trust erodes, then we are not going to be able to leverage it. If I don't trust it and you don't trust it, we are using AI, but if we see that it's constantly giving us trust issues, we're like, okay, we are not going to leverage you anymore. We are not going to talk to you. Sorry, this is not happening. It's like that aunt at Thanksgiving constantly saying something that isn't true. We have trust issues; we all have that person in our family, and we never just believe anything coming out of that person's mouth. We don't want AI to get there.
Jess Carter (27:17):
Well, I've seen a few people in academia who, in their prompts, will say things like, hey, for this I need a literature review, and I do want links to every article, and I want to be able to open them and see them. So when you need to make sure you're finding real articles, so they don't end up fabricated in, I don't know, a public bill or something, you can actually check them and make it easier for yourself to verify what it's saying. I really appreciate the way you understand so much of the technology, and that you still have a few things that keep you up at night as you figure out how we move forward together. This has been extremely insightful. Before we go, is there anything else that we haven't talked about that we should?
Sindhu Venkata (27:55):
What I would tell our listeners, or anyone who's in a leading position in an organization, is to lead with AI with intention. The pressure to win at AI is a real problem, but the best competitors in any domain are thinking, let's reflect, let's review, let's add structure and intentionality, and let's also give room to experiment. Let's just not go do a hundred pilots. Let's actually build that intention. It's not about saying we are going to invest in AI at a town hall. That's not going to cut it anymore. It’s about: here is how we are going to intentionally leverage AI in our firm. And it can be just small efficiencies, and that's fine, too.
Jess Carter (28:37):
Yeah, that's right. Don't have FOMO; your company doesn't need to over-adopt it right now. Well said. If people want to follow you and hear more about what you have to say or think, what is the best way to keep up with you?
Sindhu Venkata (28:49):
My LinkedIn would be the best way to follow me because I think I've been banned from all the social media sites, but not banned from LinkedIn yet, so I would tell our listeners to follow me there.
Jess Carter (29:02):
We will add your LinkedIn to the show notes so they can follow you. Sindhu, thanks so much for joining us.
Sindhu Venkata:
Thank you so much for having me. This conversation was fun.
Jess Carter (29:11):
Thank you for listening. I'm your host, Jess Carter. Don't forget to follow Data-Driven Leadership wherever you get your podcasts, and rate and review, letting us know how these data topics are transforming your business. We can't wait for you to join us on the next episode.