Data Driven Leadership

What Every Leader Needs to Know About Sustainable AI Adoption: Insights from a Fortune 500 AI Executive

Guest: Patrick McQuillan, Global Head of AI and Data Governance, Fortune 500 Company

In this episode of Data-Driven Leadership, Jess Carter sits down with Patrick McQuillan, global head of AI and data governance at a Fortune 500 company, to discuss how leaders can adopt AI thoughtfully and sustainably. Patrick shares his expertise on balancing rapid innovation with long-term success, the critical role of AI governance, and how organizations can avoid common pitfalls in their AI journey.

Listen On


Overview

"Slow is smooth, smooth is fast." This saying rings true when it comes to AI adoption in today’s fast-paced world. While everyone seems to be rushing toward the next big AI breakthrough, the reality is that hasty decisions often lead to costly mistakes.


Tune in to learn how a strategic, measured approach to AI can drive lasting impact for both businesses and their customers.

In this episode, you’ll learn:

  • How AI governance reduces failure rates and protects trust
  • Why data foundations matter more than new models
  • What leaders should ask before they fund or scale AI

In this podcast:

  • [00:00-05:50] Introduction to the episode with Patrick McQuillan
  • [05:50-12:41] Guidance for early AI adopters on pace and expectations
  • [12:41-15:28] When AI governance becomes a leadership priority
  • [15:28-20:10] The likelihood of public AI failures
  • [20:10-26:36] Long-term AI adoption
  • [26:36-31:10] AI in higher education and job readiness
  • [31:10-37:55] Responsible use of GenAI tools in the workplace

Our Guest

Patrick McQuillan


Pat McQuillan is passionate about data and AI as tools for driving effective change and decision-making. He has held global data and AI leadership roles across various Fortune 500 companies. Previously, he led international consulting teams to drive data and AI strategy, technology enablement, and regulatory compliance for Fortune 100, government, and higher education clients. He is also an adjunct professor at the University of Chicago and Northeastern University, where he designs and instructs graduate courses in business, data science, and data governance.

A sought-after subject matter expert in data and AI governance, business intelligence, AI product innovation and risk strategy, Pat earned an MBA from the University of Oxford and a BA in Economics and International Affairs from Northeastern University.

Transcript

This transcript has been generated by AI and optimized by a human.


Show ID (00:04):

The power of data is undeniable and, unharnessed, it's nothing but chaos.

 

(00:09):

The amount of data was crazy.

 

(00:11):

Can I trust it?

 

(00:12):

You will waste money.

 

(00:14):

Held together with duct tape.

 

(00:15):

Doomed to failure.

 

Jess Carter (00:16):
This season, we're solving problems in real time to reveal the art of the possible, making data your ally, using it to lead with confidence and clarity, helping communities and people thrive. This is Data-Driven Leadership, a show by Resultant. 

 

Jess Carter (00:34): 

Hey guys, welcome back to Data-Driven Leadership. Today, we're going to have a conversation about how AI is reshaping the way we think about leadership, decision making, and strategy. And our guest is Pat McQuillan. He's a leader in responsible AI and data governance for a Fortune 500 company, and he's a professor on this topic. It's a little bit of a longer episode than normal. We try to keep them pretty short, but I can't begin to tell you how important and practical he made this. Content about AI is everywhere. And if you can imagine, we get so many people that want to talk about it.

 

(01:06):

And one of the things we're committed to is trying to make sure that if we're talking about it, it's really, really actionable. And this conversation is exactly that. Pat is a really clear communicator, and he has this sort of alternative sense of urgency that is founded in wisdom and pragmatism and discernment about how you play a long game and win. If you're sprinting during a marathon, you're not going to make it. And I think Pat encourages leaders, board members, data leaders, scientists, and engineers alike to really stop for micro moments of aligning on strategy and the long-term game, and making sure that you are being thoughtful and considerate about what AI can do, what AI should do, and when you should rely on it. So I think this was a really helpful episode for me. I hope you guys enjoy it.

 

(01:58):

Let's get into it. Pat, welcome.

 

Patrick McQuillan (02:02):

Very happy to be here, Jess. I'm looking forward to the discussion.

 

Jess Carter (02:05):

I'm really excited to dig into AI and leadership, especially from someone who's deep in it every day. One of my first questions might be like, what does your life look like? If you're a leader in responsible AI and data governance for a Fortune 500 company and you're an adjunct professor, what does your week entail?

 

Patrick McQuillan (02:22):

It's very busy and, I'm sure no surprise, every day is different. But typically, I'll start with the governance and responsible AI aspect. The work is really spread out everywhere because, for this particular company and the companies I've worked with in the past, it is always all products, all geographies. So if you're operating in 190 countries and you have 200 products, you're looking after the data and AI for all of those. It's across the board, and governance definitely bleeds into many areas beyond risk. Each company does governance a little differently, but where I've built my teams and my operations, it tends to be across working with product teams, and working with data scientists and engineers to make sure the right infrastructure, the right systems integrations, and all of those good things are in place, right?

 

(03:10):

Metadata is in place. And then also working with legal and risk teams, talking about compliance, new laws coming through, and requirements we have to meet, especially in more regulated industries. I think a lot of companies don't realize that if they're operating in the EU, they're most likely still heavily regulated, even if their industry isn't. So people tend to be subject to things they're not aware of. So there tend to be those conversations with product, with engineers, with legal and risk.

 So the days vary quite a bit in terms of strategy, technical engineering discussions or platforming discussions or consultations with legal, developing our own policies, working with legal to develop their policies, doing compliance checks, risk assessments, and making sure that we basically have a paper trail from start to finish, that everything's working the way it needs to be, and then we know what success looks like.

 

(03:59):

We're constantly iterating on that over time. And then the adjunct stuff is just really fun. So historically I've taught at a few different universities, and I particularly enjoy Chicago because not only am I teaching graduate students and professionals in a few different chief AI officer programs and things like that, but I also get to design my own content and courses. So it's been really, really fun and rewarding. I was giving a lecture last week, and the chief data officer of a large Fortune 500 company out here in San Francisco is actually one of my students. And we started chatting and I was like, "I remember you from so-and-so." And they were like, "Yeah." It's a small world, and you get to have a lot of really refreshing conversations on the cutting edge of things that people are solving for.

 

(04:40):

So it's very interesting.

 

Jess Carter (04:41):

That's super cool. And it makes sense then, you're right, that one of the things I'm figuring out as we watch the landscape of AI adoption from a technology consulting firm is you do have this need for everything you just said: this governance in place for responsible use and compliance and risk prevention. And you have these stakeholders across the AI literacy spectrum: the early adopters, people who are leaning in, and people who are cautious and waiting for the right policies or legislation to come out to understand what's appropriate or not, which will take a minute. But it's been interesting to see that sprawl in the middle of 2025, just the depth of sprawl when we talk about AI. Every single episode, every conversation, wildly different. To your point too, I've almost had to start saying, when someone says, "Well, I'm in AI," "Well, say more." What do you mean you're in AI?

 

(05:36):

Because it's like, are you building the infrastructure? Are you building the capabilities? Are you designing the strategy and framework? Are you building an agent this week? What it means varies substantially. So one of my questions for you is, for those that are maybe early adopters, and I'm sure you have some students that are like, "I want to go," maybe they're in SF or Chicago or New York, and the expectation is that they're two years ahead of everybody else. How do you guide them on pace? Because there is this governance piece that kind of has to move a little bit slower. If you see them and they've got that fire in their eyes, what advice do you give them?

 

Patrick McQuillan (06:19):

Sure. So for students, for starters, I've noticed a lot of people I've recently hired on my team, students that I teach, a lot of the younger generation are trying to get involved in this type of thing. It's a very exciting field and it's becoming more and more complex. But one thing that has affected students and leaders both over the last five years is that the advent of generative AI has made data science a very accessible field, which is good in some respects, because you're not always just learning to write code in Python or R or something else. Now you can enter basically what you would call a search bar in GPT or Perplexity or Grok or any of these other tools, and suddenly you're vibe coding. Suddenly you're developing something without any technical knowledge. Philosophically, you have that technical knowledge, but you're not in tune with what the underlying neural network is built on, how it operates, what it's been trained on.

 

(07:17):

You're just typing in words. So it's more of a logical type of approach. Traditional data science folks, I get very excited, because they're trained on both sides of that: the technical aspect and the more philosophical, generative aspect, both of which can contribute to agentic AI depending on the system you're building. But the advice I would give is, folks are very eager to build, and they're eager to build something good. And the two things I would say, one is very common advice, and you hear it a lot on the show, which is that perfect is the enemy of good. Unlike an exam, your goal here is not to build something perfect. Your goal here is to build something that's going to work. It's going to be a car. There will be things that go wrong with it.

 

(07:56):

You're going to change the trip plans. You are not the master of this tool. There is always someone funding it or a manager's manager and it's part of a bigger picture. So do not sacrifice a large amount of additional time to make it perfect. If it works, if you're able to put in the right controls and quality assurance and it gets you 95, 99 percent there, that is okay and expect it to change and be disassembled and reassembled and be comfortable with that. It is always an art at the end of the day. It is an art wrapped around a science. 

But then the second bit of advice I would give is always ask questions. When I was a data scientist earlier in my career, I just said yes. I just wrote the code and I pumped it out, and I wondered why there were redundant models we were building.

 

(08:40):

I noticed there was work invested in different places by the same person doing very similar things, and you'd have higher failure rates because we were rushing to get stuff out. If I had asked more questions, I could have maybe helped my manager realize we could consolidate some of this work and basically increase our success rate in getting things deployed. For the leadership side of things, this is something that I've been very outspoken about. I understand the pace of the market requires rapid innovation. That is an unavoidable truth, and it's a pressure that's always sitting on boardrooms, on shareholder meetings, and with the C-suite. But the reality is there's so much rapid adoption of new technology to the point where, and I was a consultant before this, so I definitely saw this spread across the board, it becomes, by the end of this quarter, we need to have X, Y, Z.

 

(09:29):

And it would be like, "Oh, we need a hundred AI models in this type of AI because we promised the shareholders." That gives you a short-term spike in the stock price and some media attention. Almost all of that value is eroded over time; it usually doesn't carry into long-term value. So if you're chasing short-term goals, you're going to get short-term results. And you're also going to get higher failure rates, because these teams are being given three to four months to go from start to finish and make a miracle. You're going to have 70, 80 percent failure rates on these deployments, and you see it all the time. Most things don't get through POC, and it's falsely assumed in the industry that that's the best you should expect. But what I've seen is that, through effective governance, you have to balance that short-term thirst, which I understand has to be quenched, with investing in more long-term, sustainable AI projects, because what's the value in this if it's going to be gone in six months?

 

(10:26):

We can't keep throwing darts at a dartboard and basing the company's valuation on that. You can get to a 30 percent failure rate, and drop it further, if you just think long-term. Let's build out the platforms and the infrastructure to support this. Let's invest in the data that's feeding into this AI. AI is nothing more than data filtered through logic. So how do we invest in the good data and the good platforms? Most AI and machine learning code can be written in a few lines, and maybe 80 percent of the code before that is just prepping the data. It's making something sustainably architected to provide long-term value, with good guardrails in place so that it can actually live onward and set an example for your process.

 

Jess Carter (11:07):

For the people who didn't spend time in front of LLMs, it's not dissimilar. The data cleaning isn't the sexy work, but it is what generates the most valuable outcomes. People thought they could just sit on top of it and be a data scientist, and it's like, if you don't care about the cleanliness of your data, you don't actually understand the value.

 

Patrick McQuillan (11:25):

That's exactly right. I think people are naturally inclined to be bored by maintenance. No one wants to take their car to the shop. Nobody wants to ... I mean, I hope we all brush our teeth at the end of the night, but it's not the most exciting part of the day. Maintaining things tends to be quite boring, and in the corporate environment, it's hard to make a selling point for investment. But at the end of the day, that all goes to the side, because the foundational truth is we see the results of what happens if that investment doesn't happen, boring or not, unexciting or not. So I manage the data catalog for the company I work for as well, and I recently convinced quite a bit of the organization to start really bringing it to the next level.

 

(12:11):

And a gentleman I work with, well, no one wants to talk about cataloging your data or metadata and knowing where your PII is, right?

 

Jess Carter (12:17):

No.

 

Patrick McQuillan (12:18):

No. And he goes, "Can you look in this deck I made?" And the title of the deck is The Data Catalog: The Least Sexy Thing You Wish You Heard of Sooner. And it's true. And leadership is fully on board. People love it, but…

 

Jess Carter (12:30):

Wow.

 

Patrick McQuillan (12:31):

The fact of the matter is people don't intuitively think maintenance is so important, but you don't want to eat at a restaurant where everything's dirty, no matter how good the food is.

 

Jess Carter (12:41):

Wow. Okay. There are just so many different aspects of this that I want to ask you about, because I'm thinking, you work for a Fortune 500 company, so it's a larger firm. One of my questions is for firms that aren't so big, firms that aren't so successful yet, that are trying to get their arms around AI. And I think there's an important thing we've kind of implied but haven't said outright, which is that the level of governance required to do this well, in a sustainable way, does require organization-wide buy-in, right?

 

Patrick McQuillan (13:13):

Yes.

 

Jess Carter (13:14):

And a lot of entities, I think, are like, "Oh, my CIO will figure this out," or, if your department wants to take it on, you take it on. It sounds like if you want a sustainable model, at some point it's going to have to be a leadership priority. Is that fair?

 

Patrick McQuillan (13:29):

I completely agree. It's not only fair, I would split it into two categories: highly regulated companies, like financial services, defense, and energy, and not highly regulated, like e-commerce or manufacturing, in terms of data and AI. E-commerce and manufacturing, the less regulated ones, which I've worked for and had as clients, get away with a lot more. And sometimes lack of effective governance can just be a small fee you pay as a price of doing business. I wouldn't say that's a good business practice, but it's what people do. We see it all the time. But at the end of the day, like I mentioned, regulators are starting to crack down. We have the EU AI Act, which is coming into effect. It's similar to when GDPR came out and all of a sudden everyone had to start caring, because if you do business in Europe, you're subject to it, even if it's just localized governance. Sometimes what I've seen companies do is the European team will have its standalone governance function, and for the rest of the business, maybe governance is more performance analytics, quality of data, things like that.

 

(14:25):

And if they want to take that route, that's okay. Really, the best value add for doing more than that is establishing trust with your customers, making sure their data's being treated well, that it's not being shared, and that AI isn't doing anything it shouldn't be. With highly regulated companies, of course, it's a totally different story, and governance absolutely has to be centralized. It should be a board-level decision how governance is going to be oriented, and often individuals within those companies who hold certain titles are actually legally liable, as individuals separate from the business, for certain behavior that AI or data is being used for. Sometimes that can get lost on the broader leadership pool and is known only by that individual and a small group of risk partners and legal partners. It needs to be elevated, and it needs to be understood that this is always going to be an expensive problem.

 

(15:12):

It requires an enterprise-level approach.

 

Jess Carter (15:15):

Well, that was going to be one of my questions for you, which is we've seen in the news consulting firms who've submitted $50 million engagements that don't make any sense. I'm kind of surprised we haven't seen yet, and maybe we have and I've just missed it, but I'm surprised in a public company, there hasn't been a big whoopsie yet in, oh, we used agentic AI to try and generate something and we were way off. Has that happened and I've missed it or do you just think it's a matter of time?

 

Patrick McQuillan (15:39):

I think it's a matter of time. I used to work in the regulatory space for many years as an economist, and I did a lot of work with the DOJ, a lot of work with the SEC, and some European commissions, working on antitrust, securities fraud, and misuse of data. Typically it takes a while to build a case, but when they build a case, they have it. I would almost look at it with a similar approach to the IRS: they know you did it once, but if you're a repeat offender and they have a paper trail, then it becomes substantially easier, and much less expensive, to make an example of you. So I think agentic is a little too new, but we've seen it happen with GenAI. And obviously agentic is really just built on GenAI or predictive AI with some additional software components. It's the actionability of agentic AI that I think is the differentiating factor: independent decisions being made on a rapidly changing learning curve.

 

(16:29):

So I expect we'll see some examples.

 

Jess Carter (16:31):

Oh my gosh. Okay. Man, I want to talk to you the second something comes out, to help me understand it. I mean, it is genuinely interesting to watch the world adopt a brand-new technology and absorb its impact on the market. That's just an interesting thing to watch happen. But then it's the age-old thing: all the cars slow down where there's an accident. You want to understand what happened, maybe make sure you're not doing that. I think of, are you a fan of The Office? Did you ever watch that show?

 

Patrick McQuillan (17:04):

Of course.

 

Jess Carter (17:04):

I think of the episode where they realize they're getting a tax break because of somebody who was an offender, and Kevin is like, "Hey, I've had him explain six times what he did, and it's what I do every day." It's this sense of, "What is not allowed now, everybody? Because I want to make sure I'm not doing it." I anticipate that's going to be a theme in the next half decade. Do you think so?

 

Patrick McQuillan (17:31):

I think so too, to a degree. I mean, hopefully we don't have too many people ending up in situations like The Office at the end of the day, but I do think there is a risk. And I think with any type of rapid innovation in the market, you always are going to see some controls eventually being put into place, and they tend to be very reactive, from the market or regulators. They're never proactive. It's always a reactive circumstance. And don't get me wrong, I am all pro innovation. Innovation is the lifeblood of the market. But the thing that frustrates me, and one of the reasons I'm in governance, is unbridled innovation, which ends up producing a massive amount of failures and a massive amount of cash burn.

 

(18:14):

And from a business perspective, that's not great.

 

(18:20):

And I think that there's a very easy way: if instead of going for that 30 percent return you go for 22 percent, and move from a 30 percent guarantee to an 80 percent guarantee that this thing's going to roll out at scale, you're going to do much better as a business over time. But a lot of the time it's disorganized, and it might be a localized team doing this work, making some new products and innovations and research. It's exciting, but usually it's very isolated because of the swift timelines and the way they operate. And effective governance is not designed to pull back any kind of revenue. Quite the opposite, it's designed to maximize profitability. Let's reduce the cost to the business, get this out quickly, and actually give you the tools you need to succeed, instead of hoping that your VP isn't sweating at night because they can't get that ten percent year-on-year revenue increase, because they're throwing money at things based on a short-term goal.

 

(19:12):

And that's an exaggerated example, but you need to have controlled innovation in order for innovation to be sustainably successful.

 

Jess Carter (19:19):

It's like commoditized wisdom is what we're looking for. I mean, what you're saying is an age-old story. My career started in a lot of state agency systems trying to get off mainframes, and they were five, six, seven, eight, sometimes nine years and nine vendors deep, trying to keep nine-year-old code alive. And so there's a little bit of, hey, I really like your sense that unbridled innovation would absolutely yield the same result; it's the exact same thing with a different technology. And the question is, can you afford to behave that way as a business? Because it looks so exciting, but I really like that sentiment of, if you can just take a breath, align it to your business strategy, your risk portfolio and tolerance, and actually do it well.

 

(20:10):

Now, when you have a team that wants to embrace AI in a longevity wisdom-filled model, what do you say to that team if they don't have a path? If they don't know where to start and their board and their leadership team are not super into AI yet, how do you guide them?

 

Patrick McQuillan (20:25):

I've encountered a lot of that, actually, and with some very large companies, too. So it's not only the small-scale ones. You see it in the food industry sometimes, large Fortune 500 food companies, but you'll see that type of thinking large and small. At the end of the day, there are basically two approaches you can take. One, if there's budget for it, do a test case with a vendor. It's not going to capture everything, and of course there's no off-the-shelf solution, but there's a huge need for this now. There's a reason we're having this conversation; everyone's asking the same question. And so there are some pretty affordable vendors who you could bring in for responsible AI checks and scoring, who can help. Sometimes they offer small consultations for platforming and things like that. You can make a very simple tech stack with some open source and maybe one or two small paid vendor engagements to just get things started.

 

(21:18):

And the best advantage we have is time. So if you nip this in the bud, you save yourself dozens, if not hundreds or thousands, of problems down the line, depending on how much you scale. Second, or more simply than that, if you're really doing a MacGyver situation, definitely start with whatever your warehousing tool is. Maybe it's Google Cloud, maybe it's Azure, whatever it may be. Just make sure the data is cleanly organized and categorized appropriately: let's say, I don't know, sales data versus customer service data versus finance and HR data, whatever it may be. Make sure that you have the appropriate metadata in place. A lot of these have metadata scanning tools, or you could even just do a simple crowdsourcing for your most important tables only. You basically want to look for PII.

 

(22:04):

You want to look for data source, and you want to look for whether it's tied to certain products, so that you have discoverability and some sensitivity in mind. And then lastly, when you feed this into your AI models, there are very simple ways to calculate scoring for that AI. So if it's traditional ML: precision, accuracy, recall, drift. There are also ways to test the bias or the fairness against certain protected demographic groups, and there are very, very industry-standard ways to do it. Some of them actually come out of Chicago, and I'm not saying that as a plug; some of the standard ways to do it really were developed in Chicago. And then for generative, there are some very simple Python packages that engineers can use to calculate things like hallucination scores or toxicity scores. And if you have the right benchmarks, a quick Google search, a quick internal reflection on how things are operating over the course of a week, you could probably get something simple spit out that's 80 percent there that wasn't there a week ago.
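As one hedged illustration of the traditional-ML scores Patrick names, the pure-Python sketch below computes precision, recall, and accuracy from toy labels, plus a Population Stability Index as a simple drift score. The toy numbers, and the common PSI rule of thumb of 0.1, are illustrative assumptions, not from the episode:

```python
import math

def precision_recall_accuracy(y_true, y_pred):
    """Classification scores from binary labels and predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall, correct / len(y_true)

def psi(baseline, current, bins=4):
    """Population Stability Index, a common drift score.
    By a widely used rule of thumb, PSI < 0.1 means little drift."""
    lo, hi = min(baseline), max(baseline)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]
    def frac(values):
        counts = [0] * bins
        for v in values:
            counts[sum(v > e for e in edges)] += 1
        # Floor at 1e-6 so empty bins don't blow up the log
        return [max(c / len(values), 1e-6) for c in counts]
    b, c = frac(baseline), frac(current)
    return sum((cb - bb) * math.log(cb / bb) for bb, cb in zip(b, c))

# Toy scoring run: labels vs. model predictions, plus a drift check
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
p, r, a = precision_recall_accuracy(y_true, y_pred)

baseline_scores = [0.2, 0.4, 0.5, 0.6, 0.8, 0.3, 0.7, 0.5]
current_scores = [0.21, 0.41, 0.52, 0.61, 0.79, 0.31, 0.69, 0.49]
drift = psi(baseline_scores, current_scores)
```

In practice these numbers come from libraries like scikit-learn, but the underlying arithmetic is this small; the hard part, as Patrick says, is having trustworthy data and benchmarks to feed it.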

 

Jess Carter (22:58):

Oh, that's awesome. So my other question is, what if you're maybe the early adopter and you want the company to get further ahead? You're impatient, you're frustrated because you want more and more and more, and the company's not doing some of the strategic things that you just mentioned, even though you've already cautioned the organization on some of that. I imagine some of your students might be in that space, too. What do you say to them?

 

Patrick McQuillan (23:21):

It is a bit of a truism that you're always going to be facing some moral conundrums, I think, when you're working on behalf of a larger organization. I would say, be aware that the industry we're in, for the most part, has neutral-to-positive morality in terms of the AI. And one of the reasons I love this industry is because I'm fighting on the side of ethics while seeing a good ROI, which is the selling point for everybody. But from a data scientist's perspective, for students coming into the industry, you have to accept that people will disagree with you, people who have been there longer, who carry more clout, and who hold the purse. That's something you're going to have to be mindful of and prepared for. You can voice your opinion, you can voice your thoughts.

 

(24:11):

And I encourage that, but make sure that it's constructive, that it's toward getting a higher ROI, toward preserving profitability. And if you find a way to, again, reduce the failure rate of AI POCs, that in and of itself is a proof point for installing some form of governance more effectively. But if there's not the appetite, or if leadership is educated but just not interested, or they don't have the budget, or a promise was made, whatever it may be, you're unfortunately going to have to accept that in the short term. And the motivation I would offer is that as you progress in your career, you start holding the purse, you start managing that strategy. And we're seeing a strange shift in the market, where 10 or 15 years ago, data scientists did not hold senior levels at companies. It was too young of a field.

 

(25:03):

People will debate me on this, but let's just say formal data science as a degree program started around 2010, 2012. Back then, you weren't going to see a chief data officer; now it's becoming more and more of a common title. Chief technology officers are starting to take on much more of a role, along with chief information officers and chief AI officers, titles that either didn't exist or are ramping up in their role. And data scientists are becoming directors, VPs, and global leaders for the first time. That's why we're starting to see more of a push toward effective governance and a stabilization in sustainable AI practices, which benefits everybody: the business, the customer, and the shareholders. So there's hope for data scientists. If they stick through it, give it a few years, they'll start widening their radius of influence and can start shaping things the way they should be.

 

Jess Carter (25:53):

I imagine anyone who's listening to this is taking a deep sigh. What I appreciate, in what I'll call a bit of a volatile market, is the long game you continue to be committed to playing. It's like, hey, let's all learn that there may not be shortcuts, even if it looks like there are, and we have to work together to figure out the right duration, the right patience, and the right risk tolerance. All of those things come together to serve an organization, including your own maturity as a leader. I wanted to pivot and ask you a couple of questions about UChicago, because it's a completely different angle on AI. And that's my point: the topic is so broad.

 

(26:32):

But one of my questions for you: over the last few months, I've noticed a lot of emphasis, across the nation and among funding sources, on what higher ed in AI should look like. And I've been so fascinated to pick your brain because you are a professor. How much do students need in order to meet AI job-readiness expectations? And how does a professor like yourself think about it? What's your pedagogy on curriculum and on the retention and development of your students? If the assignment is a paper, can you even do that anymore?

 

Patrick McQuillan (27:07):

Yeah. No, it's a very valid question. It is a fascinating time right now. First of all, I think we've all taken statistics at some point in our lives, right? I hope some people listening have had a good statistics professor, but I have heard so many stories of how boring and dry the topic is. It is admittedly dry. Before UChicago, I was teaching at Northeastern University in Boston, at their AI Institute actually, and I was teaching stats as one of my courses. All the other professors were like, "Oh man, you're teaching stats this quarter. I don't want to be you. I'd teach those students and they'd fall asleep." And I'm saying to myself, "I've got to find a way to reach these students." What it comes down to, I would say, is a couple of things.

 

(27:54):

One, it's understanding that if you have a foundational grasp of the rudimentary building blocks of AI, which is ultimately statistics, then no matter what happens in the industry, you're going to be just fine. As I've described to my students many, many times: in data science and AI, if you're an engineer, if you're the person building this, seeing the idea is an art, building it is a science, and then managing it is an art again. And there's no hard truth. Mathematics can lie. Just because it's numbers doesn't mean it's true. Anyone can lie with mathematics.

 

(28:35):

So the fact of the matter is, if you know the building blocks and you understand how to approximate truth, then no matter what is coming down the pipeline in new AI innovations, you are going to be valuable. The skills remain consistent while the application changes, and that foundation is extremely valuable. I know data scientists who got their degrees in 2008 who are extraordinarily capable of managing the agentic space right now, because it's the same frame of thought. For the more senior students I have in these executive education programs, this is where a lot of the issues can be solved. It's a non-technical conversation: it's funding, it's timelines, it's understanding that AI isn't going away and that you can't simply put those two letters on a slide and ask your team to figure it out. You have to have some understanding of what's required to implement it, set goals, track quality, and ask the right questions.

 

(29:39):

You don't have to be an engineer, but you should be able to ask: Do we have the right platforms for this? How long is this going to take, since we're doing something new? Are we going to have to redo work? How much money will this take, and can we get it done for half as much, rather than investing in the wrong things or making wild assumptions as a non-technical specialist? Anyone with AI in their title has to be technically aware to some degree, even if only at a high level. And the main incongruence usually comes from the high-level conversations, because they're so far away from the sausage being made. So I tell my more senior students: you can resolve a lot of that whip at the end of the roller coaster if you simply know the right questions to ask, and if you engage with and listen more actively to the data scientists on your team.

 

Jess Carter (30:26):

Wow. I mean, what I feel is probably encouraging to people who maybe didn't come up through a computer science undergrad is that systems thinking, statistics, critical problem solving, problem-solving methodologies you may have learned, Six Sigma, IDEO, there's also this logic philosophy underneath it all. It's basically saying your ability to rationalize, to make sense of things in a fairly programmatic, thoughtful way, to break down a problem and understand how to prompt the right things, but also to appreciate what you're prompting and to hold appropriate expectations given the infrastructure you have and the data cleanliness you're provided, all of that plays a really important role. Say you're advising a leader who doesn't know where to start: no policy has been developed, and they've got access to ChatGPT but also their own instance of Copilot. I try to tell people, hey, you have to be thoughtful.

 

(31:25):

Please don't dump your IP into ChatGPT. Please don't dump your proposals in. Don't dump your key intellectual property; that's where I'm going to go to Copilot instead. But even understanding how to build an agent, it's not actually hard. I've built a couple of them for repeated tasks. So I'm going to tell you what I do, and then I'm curious whether you'd give me advice or tell me, "But, Jess, wait." In my job, I have a similar variety as you do. I've been at my business for 12 years, and I do different stuff every day. What I've tried to do is write down when I'm doing the same thing, because that's the easy, low-hanging fruit. And when I am doing the same thing, I ask: is it IP or brand-specific content?

 

(32:11):

In which case, if I want to try to automate it with an agent or just a great prompt, I do that in Copilot. If it's something where I can use first names and context to help me write, well, certain tools work better for certain things than others, and I'm sure you have preferences like other people do. I'm going to use Claude if I want something more technical. Some people are Gemini versus ChatGPT; I tend to prefer ChatGPT. The point is, I'm careful about where I'm putting things, and then I'm looking at repetitive events in my day and asking how I can reduce them. One example: I audit some of our deals before they go out the door, just for risk.

 

(32:50):

And I have a QA assistant now where I say, "Hey, take my checklist, take our templates, take this deal, and help me spend my time well." I'm still the human in the loop. It helps me spend my time more effectively, not hunting for issues but evaluating them once they've been found. Does that make sense?
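The checklist-driven audit assistant Jess describes could be sketched roughly like this. Everything here is hypothetical (the `audit_deal` function, the sample checklist, and the crude keyword matching standing in for a model's review); the point is the division of labor: the tool hunts, the human evaluates.

```python
def audit_deal(deal_text: str, checklist: list[str]) -> list[str]:
    """Return checklist items not found in the deal text, flagged for human review."""
    text = deal_text.lower()
    # Naive substring matching as a stand-in for an LLM-based check.
    return [item for item in checklist if item.lower() not in text]

# The assistant flags gaps; the human in the loop evaluates each flag.
flags = audit_deal(
    "Scope of work defined. Payment terms are net 30. No liability cap stated.",
    ["payment terms", "liability cap", "termination clause"],
)
```

In practice the matching would be done by a model against the full checklist and templates, but the human-in-the-loop shape stays the same: the output is a short list of things to evaluate, not a decision.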

 

Patrick McQuillan (33:08):

Makes perfect sense. I agree with the approach you're taking. And I would add: be sure that your assumptions are well communicated and consistent, because that is huge. One of the courses I teach is an intro to GenAI for business leaders, and the first thing I tell them is to treat GenAI as a coworker. It's an intern who is very socially awkward and probably very, very bad at getting the point, but it has read every single book in every library in the world. So it's hyper-informed, but you really need to hold its hand to make sure it does awesome work. Never assume it knows what's in your head; it only knows the words you type in. So usually I will actually include, at the top or bottom of the prompt, the X, Y, Z assumptions, and I'll walk through them and revise as I go. It could be boundaries.

 

(34:01):

It could be a certain weighting consideration or a certain cut of the dataset. It could be something as high level as, "Bear in mind, this is a six-month plan and we're planning to iterate thereafter." Whatever it may be, put in your assumptions. And more importantly, the last thing I'll add on that front: give it a clear outcome. That is something a lot of people forget. They'll focus on the task, but not on the broader series of tasks that add up to the larger outcome. If you set out that logic foundationally first, you can always revisit a particular prompt chain and make sure it is fully informed, that it knows from the get-go what you're trying to achieve in the big picture. Then each small task is in service of that broader outcome.
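Patrick's pattern of stating assumptions and a clear outcome in every prompt can be sketched as a small template. The function name and example wording below are illustrative, not any particular tool's API:

```python
def build_prompt(task: str, assumptions: list[str], outcome: str) -> str:
    """Assemble a prompt that states assumptions and the desired outcome explicitly."""
    lines = ["Assumptions (treat these as fixed):"]
    lines += [f"- {a}" for a in assumptions]          # never assume the model knows these
    lines += ["", f"Task: {task}", "", f"Desired outcome: {outcome}"]
    return "\n".join(lines)

prompt = build_prompt(
    task="Draft a rollout summary of the pilot results.",
    assumptions=[
        "This is a six-month plan; we plan to iterate thereafter.",
        "The audience is non-technical executives.",
    ],
    outcome="A one-page summary a VP can read in two minutes.",
)
```

The value is consistency: every prompt in a chain carries the same assumptions and the same big-picture outcome, so the model never has to guess what's in your head.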

 

Jess Carter (34:48):

I am struggling, Pat, to wrap this up, because I don't want to. I'm not kidding: I think we have talked about some of this in the past on other episodes, and it can feel a little heady if you're not at the right level of technical depth, but this is so accessible, and it's so important that everyone understands both the really pragmatic stuff we just talked about and some of the larger strategic things you mentioned earlier. Is there anything we haven't talked about that you want to make sure we at least touch on? Anything we missed?

 

Patrick McQuillan (35:19):

As a parting thought: definitely be aware of your surroundings if you are an AI product manager or you are investing in innovative AI. It is a competitive landscape, it is a regulated landscape, and even for your engineers and your product managers, it's new as well. If they've done a couple of POCs, that doesn't mean they're experts; it means they've done a couple of POCs. So be very, very realistic with yourself. It's going to be a little frustrating, because you're going to realize there might be more work, clarity, and discovery to do. I'm not saying put in 100 percent of your time, but take that extra half hour, get a pen and paper the old-fashioned way, that's how I do it, close out the world, and map out what you need to consider and what you're trying to do. Because at the end of the day, every company is chasing revenue, and while revenue is a good sign of demand, I personally prefer profitability, because it takes into account everything that happens after.

 

(36:23):

And your goal is to maximize the profitability of your business, not to make a million bucks on a $2 million project. So the goal is to really think about: how do I sustainably manage our profit?

 

Jess Carter (36:37):

Wow. I don't think technical conversations normally give me chills, but that was important, Pat. And to your point, I've seen so many people, when they first start using any kind of AI they're aware of, ChatGPT or some of these GenAI tools, get frustrated because it's not exact. Well, did you give it the right prompt? Did you give it the right context? So to your point, I think it's really funny that just because you're using a pen and paper doesn't mean you're not also using AI; you're just being a little slower. And to your exact point earlier in the conversation, you're probably going to have a better outcome. Wow. This has been an absolute pleasure. Thank you so much for sharing your experience and your knowledge with us.

 

Patrick McQuillan (37:12):

I really appreciate that, Jess, and it's been an absolute pleasure to be here.

 

Jess Carter (37:16):

Hey, if people want to follow along and keep up with you and where you’re adjunct professoring next, how might they do that? What's the best way to stay in touch with you?

 

Patrick McQuillan (37:23):

I'd say you'd find me on LinkedIn.

 

Jess Carter (37:24):

Cool. Okay. We'll post a link to your LinkedIn in the show notes if that's okay.

 

Patrick McQuillan (37:28):

Perfect, please.

 

Jess Carter (37:29):

Awesome. Pat, thank you so much.

 

Patrick McQuillan (37:31):

Awesome. Thank you so much, Jess.

 

Jess Carter (37:32):

Thank you for listening. I'm your host, Jess Carter. Don't forget to follow Data-Driven Leadership wherever you get your podcasts, and rate and review, letting us know how these data topics are transforming your business. We can't wait for you to join us on the next episode.
