Data Driven Leadership

How the Cloud Quickly Unlocks Data Value

Guest: Roger Humecky, VP of Analytics, Texas Mutual Insurance

Anyone who’s undergone a data migration project knows they almost never get done as quickly as everyone hopes. Along the way, you may find out the process is more complex than expected, what you started to build wasn’t what you needed, or that you don’t have leadership buy-in. That doesn’t always have to be the case, though. Texas Mutual underwent a complex data migration project and completed it well ahead of schedule.

In this episode, you’ll hear first from Brian Vinson, client success leader at Resultant, as he unpacks the basics of cloud migration. Then later, you’ll hear from experts Michael Tantrum, data pipeline specialist at Resultant, and Roger Humecky, the VP of data analytics and data engineering at Texas Mutual, as they walk you through their cloud implementation journey. Join us if you’re considering transformation at an organizational level to become data-driven and want to know how to skip a few decades to get there.

In this episode, you will learn:

  • How to effectively get more out of less
  • What to think about when migrating data to the cloud
  • How to leverage the product in a meaningful way

In this podcast:

  • [01:34-02:36] Getting to know Brian
  • [02:36-05:15] Cloud migration 101
  • [05:15-07:55] Preparing a CEO for a cloud migration
  • [07:55-10:02] Benefits of transitioning to the cloud
  • [10:02-14:21] The cost of implementation
  • [14:21-17:10] Recommendations for leveraging the product
  • [17:10-19:28] Final advice from Brian
  • [19:28-23:08] Unpacking a cloud migration project with Texas Mutual
  • [23:08-25:02] Background on Texas Mutual
  • [25:02-29:13] The problem Roger wanted to solve
  • [29:13-36:30] Taking the time to find the right solution
  • [36:30-39:20] Integrating automation
  • [39:20-42:45] Striking the right balance of designing and executing
  • [42:45-47:04] When people understood the value of the migration
  • [47:04-49:55] How Texas Mutual is using their data
  • [49:55-1:02:41] Lessons learned

Our Guest

Roger Humecky

Roger Humecky is VP of analytics for Texas Mutual Insurance in Austin, TX, where he oversees data engineering, data visualization, and advanced analytics, including AI/machine learning. His previous experience includes establishing data and analytics teams at Comcast, Fidelity Investments, and IBM and helping turn data into actionable insights that impact multimillion-dollar decisions.


Jess Carter: The power of data is undeniable. And unharnessed, it's nothing but chaos.

Speaker 2: The amount of data was crazy.

Speaker 3: Can I trust it?

Speaker 4: You will waste money.

Speaker 5: Held together with duct tape.

Speaker 3: Doomed to failure.

Jess Carter: This season, we're solving problems in real-time to reveal the art of the possible. Making data your ally. Using it to lead with confidence and clarity. Helping communities and people thrive. This is Data Driven Leadership, a show by Resultant.

I'm your host, Jess Carter, and on this episode of Data Driven Leadership, we're going to hear and learn a little bit more about one company's lived experience implementing a cloud migration. There are assumptions about that process and what it actually looked and felt like. We'll kick off this episode with our Solution on the Spot segment where we bring in thought leaders and put them on the spot for solutions around data problems. Afterwards, we've got two experts who will take a real live example and break it down for us. Specifically, we're looking at transformation at an organizational level to be data-driven and how to skip a few decades in between to get there. To help me Solution on the Spot is Brian Vinson, client success leader at Resultant. Hey, Brian.

Brian Vinson: What's up?

Jess Carter: How are you?

Brian Vinson: Good. This is going to be fun.

Jess Carter: So I have labeled you a thought leader and we're going to talk about our scenario, a little teaser here, will be about cloud migrations. For people who don't know you, why would you be considered an expert in a cloud migration?

Brian Vinson: Okay, great question. So as a client success leader at Resultant, I work with a lot of different clients and help them find solutions to a lot of different problems. And so the thing that's really hot right now with data, especially since the pandemic, is, "How do I have access to my data? We have all these crazy on-premises systems, plus 12 separate systems in the cloud, and I have to pull 12 reports and munge them all together to get the actual report that I want. How do we do that?" And so over the last six and a half years, I've done a ton of cloud migrations, giving people access to their data in the cloud so they can access it from anywhere, safely and securely.

Jess Carter: Awesome. When we say cloud migration, for people who maybe don't even know or they're not familiar with what that is, can you just explain a little bit of cloud migration 101?

Brian Vinson: Yeah, so there's cumulus, nimbus... No, those are types of clouds that aren't... That's more weather-related. So when we talk about the cloud... This is good, because my mom has always asked me, "Hey, is that in the cloud?" And so having to talk to my mom about what the cloud is has helped me for this very moment. So what is the cloud?

So a lot of times organizations have a really big infrastructure team. They have their own data centers and they have their own servers where they have to keep people on staff to keep the server farms running. They have to keep things patched and they have to keep the maintenance up to date and they have to do the upgrades. And so you have to have this entire workforce that is specific to infrastructure, even if your organization is a baseball team or even if your organization is a hospital.

And so when you move data to the cloud, you're able to leverage infrastructure that Amazon or Microsoft or Google or Oracle have built specifically so that you don't have to worry about patches and maintenance and all that sort of stuff. It's hosted for you, they take care of the security, and they pay people way more to handle security than you could pay on your own side of things. So leveraging these large clouds that have been built by Google and Amazon and Microsoft and Oracle really allows you to take off and lets you hire the people who can do the analysis, instead of spending a ton of money on staff who manage your servers.

Jess Carter: Yeah. Okay. So that's probably a good place to greet our solution on the spot. So if I hit you with our scenario, you ready to play?

Brian Vinson: Certainly.

Jess Carter: Okay. So let's play off mom, which is always a great discussion over Thanksgiving or Christmas: "What do you do for a living in data? I don't know."

Brian Vinson: Yeah. Stuff with computers.

Jess Carter: Or dad, but yeah, stuff with computers. And so let's say that we have a new CEO at a mid-enterprise-level client, and they ask us to come in together and talk to them: "My IT person is really passionate about the cloud. We don't have it, we don't use it. How important is this really to me?" So maybe they're not a really data-literate CEO, but they understand enough. They're excellent at running a business; they've just never been the IT guy or gal. And we get this opportunity to walk in and help them evaluate: do they really need to take that step, and if so, how do we help them prepare for what that looks like, feels like, costs, et cetera? Does that sound okay?

Brian Vinson: Yeah, absolutely. If it's a client that we already know and take care of and have a relationship with, we've probably been doing work with them before, and we don't need to do a strategic data assessment to understand all the different places their data resides. If they aren't a current client, that's probably where we would start: "What do you have? Because you can tell us what you have, but until we talk to HR, operations, finance, delivery, marketing, sales, all the teams, everyone, they're going to have a bunch of data that your IT people don't necessarily know about." I think the technical term for that is shadow IT.

So if we're working with someone that we already know really well, we would say, "Hey, let's start with a pilot. Let's take a subject area or a set of reports that you run all the time. We will help you move the data for that to the cloud. We'll [inaudible 00:06:12] your connections to those data sources for your reports or your visual analytics, like Tableau or Power BI. We'll help you move that to point at this new cloud solution. And so we'll do a pilot, it takes six to nine weeks, and we will show you that to build a data warehouse or to migrate to the cloud, it doesn't have to take a year; you don't have to boil the ocean. We will take an iterative approach and start small and build from there. And after nine weeks, you'll have one set ready to go, your infrastructure in the cloud will be ready to go, and then we'll just iterate on that. And so in six-week increments you can have a lot of stuff together really, really fast."

Jess Carter: Well, and let me ask you this, too. A lot of the elements of getting to the cloud do really... Or the approach we take does seem to depend on what they do have. And so if you're using QuickBooks online, that's a different scenario than if you're using something on an app on your infrastructure. Right? And so that assessment, if we don't already have it, is pretty pivotal before we can go tell you what the rest of the experience may look or feel like. Is that fair?

Brian Vinson: Yeah, absolutely. Because like you said, you might already use something like QuickBooks online or other web applications that are already in the cloud, and so you don't need to bring that data down just to send it back up. There are other ways we can get to that data and pull it into your virtual private network that's up in the cloud. So one of the other huge benefits that I didn't talk about yet: the cloud is infinitely scalable. If you have on-prem, or on-premises, servers, whatever you bought is what you have. And so if you need to upgrade those, if you need to add more RAM or more compute or whatever, you've got to upgrade that machine. And it's a capital expense. So if you're on the budget side of things, do you want all of this to go against your capital expenses, or would you rather have operating expenses? Being in the cloud moves you to operational expenses and allows you to scale things up without having to invest in more hardware. That's the other really huge benefit of the cloud.

Jess Carter: That is exactly where I was headed next, is there's this... It's a change in paradigms where you're used to maybe buying new hardware or your IT person comes to you every five to seven years and says, "Oh, we need new stuff, it's going out of warranty," whatever. And if you make the switch, there's more about ramp up, ramp down, the amount of consumption in the cloud is what you're paying. So that those subscription fees may change month over month depending on how much capacity you're using in the cloud, but that's very different than every five to seven years you got to go spend a whole chunk of change on a new hardware. Is that fair?

Brian Vinson: Yes. Yeah. And one of the other benefits of that is it then opens you up to all of these possibilities of other tools that you can use as a monthly subscription, instead of having to pay an annual subscription and have this license. If you like it, keep using it, if you don't like it, turn it off. You don't have to worry about these large expenses anymore.

Jess Carter: So maybe the CEO is excited at the idea of pivoting to subscription or capacity-based consumption costs, but is there a big giant check on the front end of that to do the implementation? What's it cost to get them there and do they need a big check up front or some cash up front to do that?

Brian Vinson: Yeah, that's a great question. So the way that it usually works is with the tools that we use in the cloud for extracting and loading data or transforming data, it's usually around the extract and load, or the ingress and egress, as the technical term goes, where you start to see a lot of charges. But there are tools that we use, like Snowflake, a best-in-class cloud data warehouse, where they don't care how much data you push in or pull out. They don't charge that way. They charge based on how much you store, which at this time, in December of 2022, is $20 a month per terabyte. But then they do charge, like you were talking about, for the compute.

And so if you have a process that runs and it's going to run for five minutes, well, if you were to add more compute to that, you could get that five minute process to run maybe in 20 seconds. And so it's scalable, like we were talking about, but that also boils down to the number of credits and the amount of money you're going to spend. It might cost you the same amount to do it in five minutes as it costs to do it in 20 seconds, but you get it way faster.
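As an aside, the credits-versus-runtime trade-off Brian describes can be sketched with a few lines of arithmetic. The numbers below are made up for illustration; they just assume a Snowflake-style model where each warehouse size step roughly doubles the credit burn rate, and a job that parallelizes well sees its runtime drop roughly in proportion:

```python
# Hypothetical illustration of the compute trade-off: a bigger warehouse
# burns credits faster but finishes sooner, so one run can cost the same.
def job_cost(credits_per_hour: float, runtime_minutes: float) -> float:
    """Credits consumed by a single run of a job."""
    return credits_per_hour * (runtime_minutes / 60)

# Three size steps up: 8x the credit rate, but (ideally) 1/8 the runtime.
small_warehouse = job_cost(credits_per_hour=1, runtime_minutes=5)
large_warehouse = job_cost(credits_per_hour=8, runtime_minutes=5 / 8)

print(small_warehouse, large_warehouse)  # same spend, much faster answer
```

In practice, credit rates, parallel efficiency, and per-second billing minimums all vary, so treat this as the shape of the trade-off rather than a price quote.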

Jess Carter: Well, and you'll have to interject here with more lived experience than me, but what I've noticed when I've seen clients go through this pivot to cloud migrations is their behavior changes. And it shifts from, "How do we support our infrastructure that's on-prem or maybe in one data center?" To, "How do we leverage our data more effectively or leverage our cloud consumption more effectively? How do we get more out of less?"

And so there are efficiencies of scale I've observed, where it doesn't mean you're walking a bunch of people that were running your infrastructure out the door, but could we leverage them in ways to build out some more data analytics, or to better understand our infrastructure, or to assess what we really need to be paying in consumption costs. And then, "Hey, as we future-roadmap what our needs are as a business, how do we leverage our cloud infrastructure to spin something up and try it?" And we can get more agile and say, "Let's spin up a six-week iteration where we have a hypothesis that if we do X it'll yield Y, and if it doesn't work, we just turn it all off and the consumption goes down." It's just easier to experiment with our business, from what I've observed. Does that sound right to you?

Brian Vinson: Yeah, you're absolutely right. And there is a double-edged sword when it comes to the cloud. So watching those costs and managing those costs matters, and all the tools give you a way to keep you from hurting yourself. Accidentally having something that continues to run until you get a $20,000 bill would be no fun. But you're able to add limits and that sort of stuff and keep an eye on it. And the cloud also offers you the opportunity to leverage those resources better.

And so one of the things that you'll find is the more near real-time your data is, or the more near real-time you want your data, the more that's going to cost you. And so with the cloud, you can do that. You can stream data through and you can do all these things, but you don't have to, and you can still use the cloud. And so we have lots of clients that love the cloud because they have one place to go get their data from, one source for all their business users. But that doesn't mean they necessarily need it to be up-to-date all the time. So if you're a Tableau user, for example, maybe you just deliver that extract refreshed once a day instead of every 15 minutes or over a live connection. So you're still able to leverage the cloud for what it is, but you're also helping monitor and manage any consumption costs. So there are also ways to use the cloud at very little cost.
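Brian's point that freshness drives cost is easy to put rough numbers on. Assuming, purely for illustration, that every scheduled extract refresh consumes a fixed amount of warehouse compute, moving from a daily refresh to one every 15 minutes multiplies the daily spend by 96:

```python
# Hypothetical illustration: if each extract refresh costs a fixed amount
# of compute, refresh frequency scales the daily bill linearly.
CREDITS_PER_REFRESH = 0.0625  # made-up per-refresh compute cost

def daily_refresh_credits(refreshes_per_day: int) -> float:
    """Total credits spent on extract refreshes in one day."""
    return refreshes_per_day * CREDITS_PER_REFRESH

once_a_day = daily_refresh_credits(1)
every_15_minutes = daily_refresh_credits(24 * 4)  # 96 refreshes per day

print(every_15_minutes / once_a_day)  # 96.0
```

Real pricing also depends on warehouse auto-suspend behavior and how long each refresh actually runs, but the linear relationship is the point.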

Jess Carter: Awesome. Okay. So then let me ask you this. One of the painful things I've experienced though, is if a client isn't prepared organizationally for after we've gotten to the cloud migration, how to leverage the product in a meaningful way. So to me, it's like a foundation, and we've built a new foundation they can build on and they can do a bunch of fancy stuff with it, but do you have any recommendations for clients if you're like, "If you're thinking about taking on a cloud migration, organizationally, here are some recommendations from subject-matter expert, Brian"?

Brian Vinson: Yeah. So again, when we know the client and we've worked with the client, we can help them with the entire solution. If we don't know the client and we've gone through a strategic data assessment with them, like I talked about earlier, one of the things we talked about is the maturity model of your data analytics.

And so part of that is, "What are you going to do with this data as you move along the path in this cloud migration? Do you need to be in a spot where you can do advanced analytics and data science? Is that something that you're interested in doing? Do you have data governance set up? Do you know what metrics are important to you and how they're defined? We can help you set that up. Do you need help with organizational change management? Is this going to be a big deal for your entire organization? We can help you with that."

And so when we say cloud migration, yes there is a thing that we're doing, but we like to look at things holistically as an organization and help our other teams with all the other pieces instead of just the one thing that's on the docket. So we like to think about things as a whole.

Jess Carter: Awesome. And what you didn't say so far today, and I want the opportunity to clarify this because I have seen a lot of clients misunderstand it: we are not saying that the cloud is always cheaper than on-prem. We are saying there are different ways to look at the budget, different ways that you pay for it, different values. But I've had a lot of clients who just sort of assume that it's cheaper, and I think we're not saying that. It depends on how you manage it and how you use it. Is that fair?

Brian Vinson: Absolutely. Yeah, that's absolutely fair. So yeah, it's not necessarily cheaper. And more than likely, depending on what side of the budget it comes from, it may look like it's costing you more, but with the efficiencies and effectiveness that you'll gain from moving to the cloud, I think you would see a great return on investment. And we'd love to talk to you about it and see what kind of return on investment we can get for you. Yeah, we want the data to make sense, and we want the solutions that we have to have really good outcomes.

Jess Carter: Yeah. Is there anything I haven't mentioned or asked that you would make sure you also mentioned to this CEO before we walk out the door?

Brian Vinson: We like to look at things holistically and we like to be consultative. We are different than the big four. We are a little more boutique and we really do like to be relational with the way we do consulting. We don't want to take your watch and tell you what time it is. We want to be invested with you and help you along the way because my favorite thing to do is for a board to go back to a CEO and go, "Yeah, you guys made the right call by doing this project." Or for the VPs that I work with to go, "Hey, I got the SVP role because you guys helped me do this." We love it. We love to make our clients the heroes and would love to help anyone with their cloud solution.

Jess Carter: Awesome. Well, your passion is coming through. And so I think you did a great job with the CEO, I think they want to buy and talk more about a cloud or maybe evaluate their value props for it. So nice job on Solution on the Spot, Brian. I appreciate you joining me for it today.

Brian Vinson: Thank you very much. It was a pleasure.

Jess Carter: Yeah. And I think as you guys listen to the next segment, where we really unpack a scenario about a cloud migration, what's really important about the themes Brian laid out is he really set the stage for the what, the why, the value propositions or potential value propositions, and then how to take on a cloud migration that is holistic and considers the whole business and what it needs. Shadow IT was mentioned. That's a big deal. Usually when we come in and do an assessment, we uncover that there are more technology solutions in use keeping a company successful than they ever understood or had on paper.

And so I think as you hear this story and look for relevant details, you'll relate to the business problems, you'll relate to some of the challenges, and I think you'll relate to the lived experience of getting through a cloud migration, including some things that surprised and delighted. And then also, the story isn't over. I think this client is set up for success in the future because of the work they've invested in themselves, and now they're going to see endless possibilities in exactly what they can go yield, based on their business's needs, because of that foundation. So we'll go ahead and listen to that here now. The two experts you'll hear from are Michael Tantrum, data pipeline specialist at Resultant, and Roger Humecky, the VP of data analytics and data engineering at Texas Mutual.

So we're going to get started and learn a little bit more about Texas Mutual. And for that story, we've invited Michael Tantrum and Roger Humecky. So one of the first things I want to ask you guys, and I'll start with Roger is, hey, tell me a little bit about you. How long have you been at Texas Mutual? What's been your job there, your career path? And also a little bit of how did you and Michael meet?

Roger Humecky: So I started at Texas Mutual four years ago and I was the second employee in our data office. And to the company's credit, they recognized that they had a real opportunity to modernize their data stack, and they hired a CDO, who was Michael Hernandez, and I was his first hire. So the two of us put together an organization over the last four years and went through a lot of the modernization steps, and we're going to talk about our journey today. But that's really been what it's been about, is building a data organization and then delivering on some of the value that we had blocked in our warehouse and other places.

Jess Carter: Very good. And how did you and Michael meet?

Roger Humecky: So about a year into the journey, well actually, we started participating in the Tableau user group in Austin, and there we met PK, who's a prominent member of the Teknion community, and she introduced us to Teknion. And then by the time we were ready to do some stuff, Michael was our contact. So we started working with him and getting some ideas about where we would go if we were going to take the first step. We actually talked to a number of consultants, including one of the big four, and had them in here and did a whole big review. And we didn't really get a lot out of that; despite having 12 people on staff for three months, we ended up with just the reference architecture that we came into the engagement with. But we found that in an afternoon with Michael on a whiteboard, we actually accomplished more than we did through that whole engagement.

And so it was pretty clear that the right partner for us was someone that's got a point of view, understands what the current best tools are, and has a workable set of them. I'm sure there are many solutions that could work, but what we really liked about Michael's team is they had a stack that they recommended and experience delivering it for a number of customers. And they just knew exactly how to do it and how to remain focused and not get lost in all the millions of possibilities of edge use cases and things that we didn't even know if we would ever need, and didn't really need to map out on our roadmap just then.

We liked that Michael's approach was very hands-on. He is very credible, too, because unlike most salespeople that we talked to, he could actually speak from real experience and relate to us on what the actual challenges were and how to actually solve them. So I liked that he's actually done the job and wasn't just giving us marketing talking points. You could tell that he appreciated where we were coming from and could relate to us and our problems and provide realistic solutions.

Jess Carter: Now that we've maybe understood a little bit about how long you've been there and what you guys were doing, can you back up and give us a little bit of the play-by-play? What I would like to understand, and Michael, your take would be really interesting, too, is what problem were you trying to solve? And what was the context around that problem four years ago?

Roger Humecky: Yeah. So let me just back up a little bit. So Texas Mutual, we're workers' comp insurance in the state of Texas. We started in the '90s, and that's when most... Well, back in the late '80s, workers' comp in the United States was really messed up, and most insurers were actually pulling out of it. And so it left this big gap across the whole country, and different states handled it in different ways. Texas and many other states formed their own workers' comp programs because they literally couldn't get anyone else to provide insurance. A lot of the laws got changed nationally and at the state level to make it viable for companies to provide workers' comp. So there's a lot of competition now, but in the early '90s there was a real problem.

So Texas created Texas Mutual, and they spun us off when it became clear that the market had stabilized. And so we're somewhat independent of the state, though they still appoint some of our board members. And because they appoint our board members, other states don't want us to do business in their states; they feel like we're tied to Texas. So that limits us to only doing business in Texas. But that's actually a good thing, because Texas is a great state, it's growing a lot, and it keeps us laser-focused on just doing workers' comp really well. So even though it's competitive, we still have 45% of the market share. And we basically serve most of the small and medium-sized businesses in Texas. And that's our focus: Texas-based businesses.

But as you can imagine, having started in the '90s, we have a foundation of mainframe-type systems, both from an application and a database standpoint, that were built in the mid-to-late '90s and are still there. And back then, people weren't thinking about how to architect a database. It was just report-driven or use case-driven, where someone needed something. So they would create a data table, and then a year or two, or 10 years, later, someone would build derivatives off of that thing. And you can imagine, over time, we had just this massive pile of stuff with lots of dependencies on dependencies. Not built in an architected way, but more like a rambling house that had no design to it.

And as a result, it was hard to maintain, it was fragile, and it was hard to build anything on. Changes to things we didn't even care about would break things downstream. And it was like we were afraid to update it, because it was like defusing a bomb, where [inaudible 00:25:48], you're like, "Well, if you change this thing way up here, what would be the downstream ramifications on all the other stuff?" Because it wasn't mapped out anywhere and the dependencies were hidden. So there was logic in different ETLs, there was logic in different stages of reports, just logic all over the place, and it was hard to find it and trace it. So it was very unmaintainable, and it was hard for the business to get what they needed because of that. Just adding a new column or changing some calculation was a big project, because again, you didn't know what else you might break as a result. So it became clear that it was untenable.

And what also helped was the emergence of a lot of FinTech- and InsurTech-type startups that were starting to use AI and data to challenge the industry, which really hadn't been challenged since the '80s from a technology standpoint. So that created a need and a desire to modernize things, which is why the company saw the value in creating a data office and investing in our data products. So that's what got us on the journey we're on. And it was to solve this need of just unlocking this value. Because we're the largest provider in Texas, we have the most data, and therefore we should be able to leverage that data to get the best outcomes. And to a large extent we do, and we have been able to leverage it, but there are clearly opportunities to do it more efficiently, to make data more prevalent throughout our decision-making processes, and to help inform things, to make it prescriptive, descriptive, all the different types of analytics to help solve business problems.

Michael Tantrum: And I think one of the key things to think about as well is the timeliness of it. So there was a huge challenge for business users saying, "I want new analyses, I want new data," and the turnaround time, because of the legacy applications, was just too slow. Business needs to be able to report at the speed of business and make decisions in that way. And that was a key driver for Roger and team to turn that around.

Roger Humecky: And also, just the time of day the data updated. We didn't get data out till noon. And as we all know, data's a perishable good, so yesterday's data was hard for people to make decisions off of if they weren't getting it until the afternoon. We know people love to have data in their inbox before they even wake up, so that they can be consuming it at breakfast or whenever. And as we've transitioned to that, we certainly see that that's the case. People now expect it before they get up so they can just look at it on their phone. Not every day, not all the time, but when you're wanting to know, "Did we close a certain policy?" or, "Did this thing get resolved?" it's nice to just be able to pull it up real quick and check on those types of things.

Jess Carter: Yeah, I think I understand the problem, and it sounds big and hairy. And so as you started to unpack some of that, it sounds like you did invite some other voices into the conversation to help you. And I guess what I'm curious about, too, having done some similar work before, is where that sometimes seems to land flat. It's not that other people aren't capable, but a lot of times what it seems like we want is an actionable directive: "Here's how you get from where you are today to where you want to be." And a lot of times there's a 300-page book of recommendations someone might give you, but it's not guidance on, "Hey, do you understand that this is the path or the journey we could be on? Let's start here, and here's a quick win." So I'm imagining that that's maybe a moment you had on the whiteboard with Michael. Is that right?

Roger Humecky: Yeah, exactly. There's a million decisions to make, and it's a really big project. Many companies get halfway into these things and then get stuck with a mess of old systems and partially new systems and nothing that actually achieves what people wanted to do. So we wanted to wade into this carefully. We didn't want to be one of those stories that gets stuck along the way. So we actually spent the first year just dabbling with other things to provide the business some value, like rolling out Tableau, though we didn't want to go super big with that 'cause we didn't want to build a ton of dependencies on a legacy system we intended to get rid of. There were some AI projects that the business was dying to get to, so we did a couple of those, but again, nothing huge.

But it gave us some learning about how things were, what was really valuable to our business, and then it was helpful in just framing the overall discussion. Because really, at the end of the day, we're a service provider, we're not trying to build technology for the sake of it. It's to solve real business needs and provide value to the people who use it within the organization. So it was helpful to just take a year or so and really understand what they want to do and where the vision is and what the culture of the company is, and how to get things done and that kind of stuff.
And then when we were ready, we started carefully evaluating different solutions. We talked to Gartner analysts, easily a dozen of them. We talked to different consultants, we talked to different point solution people. We just wanted to understand all the ways to do stuff. Once we had a solid understanding of what we wanted to achieve and why we were doing it, that helped us immensely in choosing the right technology stack.

And then from there, again, it was a huge project just to figure out how to get started. So our big challenge was picking the right projects where we could do a proof of concept and test out some of our ideas. Some of which worked fine, and other things we pivoted on and found other ways to do so that we could accomplish this giant thing in a reasonable time period. Because I've also seen these projects get stuck or fail if they take too long, 'cause there's only so long that people can be patient in these kinds of transitional projects. They want new stuff, they want stuff to work. And so we needed to find a good balance of making steady progress, giving people enough value along the way that we could drive adoption and keep everyone bought into the project, but also complete it in a reasonable timeframe.

Michael Tantrum: Yeah. And it's an interesting problem, getting to the starting line. People tend to approach these projects in two ways. They approach it top-down, with lots of pictures and PowerPoint diagrams and architectures and reference architectures. The other way is bottom-up, where my developers just want to start coding. And both, left alone, tend to lead either to paralysis, "I can't get started," or, "I'm starting in chaos from the beginning." And I think the trick that Roger and team managed to solve was, "How do we get to the start line in a steady, governed manner and still produce standardized, good-quality, production-ready code in a timely manner, that isn't caught in paralysis or in chaos?"

Roger Humecky: Yeah, it's a tough balance, and not everybody gets it perfect. But you're exactly right, our approach was to set out the objectives that we wanted to achieve, our business-based objectives. And then just start plugging away at stuff and breaking it down into manageable segments where we focus on certain parts of the data, certain parts of the business. And achieve those things and then launch them and demonstrate the value in it and then repeat it for all the other stuff. So there's no way to actually scope out every little thing. We had to traverse hundreds of different ETLs and logic and build thousands of tables. But creating a reusable process was really important to us.

We saw this as a once-in-a-lifetime opportunity almost, where to be able to redo our entire data warehouse and entire technology stack is unusual, really. Because usually you're saddled with something that you've got to just retain for legacy purposes or some integration or something. We did have the opportunity here to redesign everything, and so we wanted to do it in a thoughtful way so that we didn't create a mess. So we did spend some time figuring out, "How can we put our business logic all in one place, easy to document and understand, and ultimately easy to maintain, so that we can achieve one of our key objectives, which is just being able to turn around changes in the future fast?" Things that might have taken six months in the past, we should be able to do the same day. And so that's what we wanted to achieve, those kinds of outcomes.

We want to be able to update our data well before people wake up in the morning. We want to be able to have data quality checks around everything. That's too big a project to do after the fact, but as you're going along and building stuff, just building an automated test at the same time that you launch every single table or artifact, and then putting that in production and having it run forever, has been really valuable to us. And now we never worry about the quality of our data or that our users are going to ping us and tell us that something's off, because we know about it well in advance. And honestly, because there's so much checking there, it rarely even flags those things because it just becomes inherent in the quality.

Michael Tantrum: So that would be one of the themes, wouldn't it, of firstly your technology choice but also your approach for a modern data architecture, which is what you were aiming for: the theme of automation? The idea that in development and design, in data quality, data testing, documentation, even deployment, having all of those steps automated, because it's repeatable and it's standardized. And I think you make a really good point, and most people forget about this, that building your first data warehouse is not hard, it's the enhancements. When a user comes back to you next year and says, "Hey, what you built for me last year was awesome, can you add to it?" And in a traditional world people go, "Whew, I really hope that developer's still here." But the automation kind of mitigates that risk and makes it much simpler.

Roger Humecky: Yeah, exactly. Yeah, we've fought hard to make it as simple as we can. Obviously, there's going to be complexity in certain things, but to make it as easy as possible to maintain and to bring on new employees and have them understand and be able to get up to speed and modify someone else's work years down the line. And we did invest in a data catalog and other things, too, that helped with that. Ultimately, it comes down to a lot of architectural design and decisions around the types of tools and types of solutions that you build. There's certainly fun things that developers like to build, but we've had to resist some of those because they would be harder to maintain. And ultimately, there's no prize for having the prettiest or most complicated data warehouse. The only prize is for having a data warehouse that works and that you can maintain.

Jess Carter: That's huge.

Roger Humecky: Yeah, so it was really beneficial where there are other modeling choices, where you could do this or you could do that. And we want to make sure that we're doing what's easiest for our users to adopt and what's easiest for us to maintain in the long term.

Jess Carter: It's hard. So for me at least, when we're making some of those kinds of decisions, I don't know if you guys would agree, but it's difficult because you walk away from something that felt constrained in a whole bunch of different ways and then you walk into this whole new world, and it's so different. It's my 1994 Ford Contour to buying a car in 2022, when I'm like, "Oh, this is fun." Everything's new and nothing is really standard. You can design the solution you want, which is fun, but you have to narrate yourselves out of the wrong conversations for months. You have to pick the right Legos for the right solution you need.

And to your point, it's not always the sexiest stuff. It is sometimes, "What do you need enough to drive business decisions?" But it is a little messier, right, than before, when you bought something and you put up with it and you built views on views on views, or analysis on top of analysis, and now you're like, "Well, we can do whatever we... There are people on the moon, we can design whatever we want." It sounds like you struck the right balance of designing and executing. You did a good job with some of the pilots, you did a good job with some of the projects, you let those lead you through this whole new world. But are there other things that you did? Did you run your old environment and your new environment in parallel to help people see the differences? How did you decide on, I don't know, how to narrate that design and delivery?

Roger Humecky: Yeah, we certainly had clear objectives on what we wanted to achieve in the new environment, and it's a stark difference from the performance of the old environment. So we do run stuff in parallel as we make the transition, just to demonstrate to people that we can produce the same numbers. Or in many cases, we've found that there were logical things that we could improve, so we could at least explain the differences and why they're better in the new environment. So we did a lot of that, but mostly it's about performance testing. Our old environment had the right data, it was just hard to get to, inefficient to produce, those types of things.

And so our main objective was really just structuring it in a simpler way: a lot fewer tables, a lot more standard naming, a lot easier-to-maintain logic that we can explain, full lineage that we can show. That was something we never had transparency into before. It was a whole project to go try to trace where a column came from. It could take someone a week or two to go trace down the ultimate source of a column before. So just having those kinds of capabilities is a stark contrast to what we had before. And being able to produce the data in a couple of hours instead of a 10-hour or 12-hour process. Before, if we started an hour or two late and the process ran into the day, competing for resources, we lost the whole day. We don't have any of those issues now, and so that's a huge benefit to our users and to our community.

So yeah, the comparison is very clear to what we have now versus what we had before, but it's always going to be a journey, too, where we're continuing to evolve. We say we're on a modern technology stack now, but I can see that most of the tools we're on will probably change over the next five years to other things. There's just so much innovation going on in the space. So part of what we wanted to do is knowing that that's going to be the case, is to have tools and designs that we think we can easily migrate to something else when a better solution comes out. We're not dissatisfied with the things we have now, but we can see that the likelihood of innovation coming in places that might not even exist today, it's pretty high.

Michael Tantrum: And the thing is that if you evolve slowly, if you make these changes slowly, you don't have to have these traumatic events every 10 or 15 years, you-

Roger Humecky: And they're hard to have, realistically. The only way you have this kind of transition that we've gone through is you got to accept a lot of pain as a business.

Jess Carter: So you agreed on this path, you started down it, you did the pilots, you did the projects. Help me understand, I'm sure there's a moment of nostalgia for you where the business started to really get it, either a specific project or a specific scenario where you showed them the data side by side. Could you tell me a story about a moment where you knew, sitting in some room, that somebody was really starting to understand the value of what you were doing?

Roger Humecky: Yeah, so I think we did hit this critical point where we'd gone through the pilots and maybe the first couple of test things, but then we started hitting a stride where we were just updating data, migrating and enhancing the reports that people used to have. And it was starting to go really quickly, and we could see all of a sudden the adoption. We hit this critical mass of new stuff where people were really bought into it and started really adopting it and using it in a big way. So our usage on Tableau was going way, way up, and we were getting a lot of questions about the reports and requests to enhance them, and you could just see we were getting a lot of engagement. So there was definitely a point where you could tell that things were going well and that people were on the train now. Not that we didn't always have a lot of good support.

The good thing about the company and this project is everyone recognized the value and the potential of where we're going, but at the end of the day it is only about whether you get adoption or not. If we build something and no one uses it, then that's a failure in my opinion. And so it was really great to see the moment where we really saw that people were starting to move over. And then that really reached a critical mass where even our key users started just moving off of the old stuff before we had to prod them to.

So I imagined that we were going to have to go hunt down some people and kind of kick them, but the good news is there was enough value in the new environment that people voluntarily switched over, and that's really what we want. We don't want to be in this place where we're having to force people to adopt a new thing, I've been in that boat before. And if that's the case, it probably means the new thing doesn't do everything that the old thing did. So I was really happy to see that users adopted it and we were able to actually retire our old thing almost a year ahead of schedule because of that.

Michael Tantrum: Roger is probably not great at blowing his own trumpet, but the way that they got users to adopt the new solution is non-trivial. And a lot of organizations, that's where your projects fail. And so being able to demonstrate to your users, not only have you got yesterday's data, but it's also met all of the data quality checks that we've mutually agreed. It gives them that confidence to say, "Yeah, okay, I can let go of my crusty old spreadsheet that was my comfort blanket and embrace the new." And too often, I think, our technology projects, we focus on the technology and we forget about the human aspect of that. But to Roger's point, if your users aren't using it, if they aren't adopting it, you might as well go home.

Jess Carter: Well, and I'm glad you emphasize that, Michael, because for me, what gets exciting even hearing this story is when you first started talking about where you were four years ago, it was just so heavy. It was interesting to me to hear the transition. It was so heavy about how long it took to get something out of the warehouse, how long it took to verify where something came from. People were spending all of their time validating the warehouse or trying to get something in it.

And when you tell the story about where you guys are now, so much of this is about engagement, curiosity, further deep diving and understanding the data. It's not so much, "Can I get something in?" Or, "Can I get something out of it?" It's, "Whoa, what does that mean?" And that's such a different position for you guys to be in, in your marketplace, back to your point about some of your competitors. And so one of my questions here is, can you help me understand, if you look at the last 12 months, how can you look around the org and tell that Texas Mutual is using the data differently, that, to Michael's point, the users are using the data differently and the value prop is holding up? Don't give away any secrets, but it'd be cool to hear some of that.

Roger Humecky: Yeah, so our main focus for the last couple years was just trying to just migrate this stuff that was essential. We actually held the line a little bit on the new enhancement requests. We are getting to those now and that's really exciting, but we weren't really trying to drive using the data differently, though we did do some of that. So our focus was primarily around, you have essential things that you depended on the old warehouse for, we need to provide those things before we can move on to the fun stuff. So we got the organization aligned around them. To their credit, they bought into that too. And they built their wishlist on the side, but they let us focus on most critical things.

Now, where we could, we added little stuff here or there. We certainly converted things from old tabular-type reports to visualizations where we could. And so there was certainly some value that we added along the way, and people definitely are engaging more with the data than they did before, but they're also seeing what the potential is. So we have a great list now that we're starting to chip away at, of all this other stuff that they see as potential. And looking through it, it's awesome stuff. I can see it's going to really impact the way we write policies, making us more efficient, and the way we process claims, making us more effective, including giving better outcomes to injured workers.

So it's all the right stuff that we're focused on and it's going to be so much easier to develop and design and maintain and roll out this stuff with a nice clean platform, which is why we prioritize just getting to the cloud, as opposed to adding more dependencies to legacy stuff. But there's a huge pent up demand and we've got a giant prioritized list. And because of the investments that we've made, it is going to be way easier to roll this stuff out than it ever would've been before.

Michael Tantrum: And you're the victim of your own success.

Roger Humecky: It's nice to be able to shift our resources from migrating and modernizing to truly developing incremental new value to the organization.

Jess Carter: That's awesome. So let me ask this, now that we're in this place, it seems like we've caught you in a really cool, pivotal, interesting time. We were building the credibility, building the trust, making the transitions, we're kind of there. Now we're starting to look ahead at all of the value props ahead of us. As you approach that, to Michael's point, new challenge, new opportunity, what are the lessons, what are some things you learned from this that you think you'll pull forward for that exercise? So I certainly heard the prioritization is key, but what else? What do you think is part of the story that you've gained and you'll leverage in the future?

Roger Humecky: Well, we've certainly gained a lot of experience with the data, how to organize it, to build data assets that can meet multiple needs and simplify things. So I think because of that approach, we're not going to actually have to build a ton of stuff in the future to meet the different data needs we have. So we can actually shift a lot of resources away from data engineering and more towards delivering solutions to the business. But we're also anticipating, and what we're seeing is a changing world here where all the sources where we gather data from are moving into the cloud and those cloud providers are actually more difficult to get data out of than what it used to be on premise, when we could just tap into the backend of the application databases.

Now, in the cloud world, everybody's creating their own data output format, so we can't use standard tools like HVR or something to stream data out of a standardized database. Now we've got to build more complicated interactions and be able to process that stuff seamlessly. I don't know how we would've done that, honestly, if we weren't already moving our data platform into the cloud, but we're going to have to deal with some of those challenges. But also because of where we're at with the data that we have, it's giving us the opportunity that we can really focus on it in business use cases and capabilities, as opposed to the backend-type processing stuff.

Jess Carter: Awesome. Very good. Michael, what about you? Do you look at this? I can tell you're always a lifelong learner. What do you pull out of this story where you're like, "Man, and the next time I walk in with a whiteboard and a Roger, here's what I'm going to pull forward"?

Michael Tantrum: Yeah. I think the trick was not to try and bite off too much in one go. And so these guys bought in really tightly to the agile process, doing lots of short two-week pieces, so there's a continuous delivery of new components. I think the other thing that was really key for these guys was that data governance was something they made part of the DNA of the project from the beginning, not a bolt-on and not an add-on. And so what that meant was, from the beginning, there was concern about standards, there was concern about definitions, there was concern about, "How do we record decisions about data definitions? How do we track lineage? We care about documentation, we care about data quality."

And all too often, these are left to the end of a project. And just watching these guys do that throughout the project, every time a data engineer went to pick up the next item to work on, they were thinking about these wider things. And because it was baked into the process, it wasn't an onerous burden, it was just the way we developed. And so this ability to turn around items rapidly and yet still service the data governance components as well was a key learning, I think, for me, watching these guys. And as he says, they came in a year ahead of time, two years instead of three. How many data warehouse projects do you hear of in the marketplace that even come in on time these days? So that's a huge win for them.

Jess Carter: Yeah, absolutely. I think it's a story I hope you feel very, very proud of. Because you're right, the things that scare me and keep me up at night are when anyone says, "We'll just get to the reports at the end." And you're like, "Ah." If you have workers' comp, you probably hang out with your department of workforce in some respect, there are probably some federal reports or state reports, and those are not the end. Those are requirements that you gather at the very beginning. How do you avoid as much pain as possible in this process? It sounds like you guys were an incredible team working together to figure that out. So I hope you guys are proud. This is a pretty unbelievable story. And to Michael's question, I have never heard of a data warehouse project going live 30% earlier than expected. That's incredible.

Roger Humecky: Yeah. No, things worked really well. We have a great organization, one that helped us stay focused, and we have great employees. We were able to bring on Teknion as staff augmentation to help us move a little faster in certain areas where we saw we needed some help, around just getting dashboards created faster or some data engineering. So we had some ongoing support from them just to help us. And it was amazing, though, to be able to do that. I wouldn't have put high odds on it at the beginning, because there are so many unknowns on top of all the complexity that we did know about. And even when we got started, making those investments in the data quality and the design aspect, we worked at what wouldn't look like a great pace, because you have to set all that stuff up the first time. So the first nine months we didn't actually roll out a lot, but we laid the right framework. So the year after that, it went really fast.

Jess Carter: Well, and you guys buying the time that you did. So I'm going to put a couple of asterisks on some things you've said today, because I think it's really important: the organization's support around you is so much more important than I think most people understand. Their willingness to say, "Hey, yes, we'll give you X number of weeks, months, sprints," whatever. "And also, let's agree in advance on what we're going to get out of it. Help us understand what we can expect, and let's just agree to that early and often, and then we'll defend it and we'll stay committed."

And I think it's very easy for organizations, especially after 2020, to be highly reactive; the economy's all over the place, things are changing rapidly all the time. It's very easy to get distracted by short-term things that grab your attention. And so it sounds like Texas Mutual has a pretty incredible leadership team that's helping chart some of the path. And you've got incredible delivery people who are making sure that they're honoring the commitments that have been made. And so back to Michael's point on the people piece, it matters so much more than people realize. I would suspect that's a significant reason why you could get done so fast. Right?

Roger Humecky: Yeah, exactly. Right? Yeah, it's so easy to get distracted in a million different ways, from either changing business priorities or different technology stuff; there's so much that you could start exploring or go down, but at the end of the day, we need to deliver stuff. And so we certainly benefited from an organization that was aligned to that and supportive of that. And then we stayed committed to that, too, internally, to make sure that we were delivering stuff. As Michael said, delivering value consistently over short time periods so that we can actually get it in front of people and get feedback and test it and then repeat that. If you're not focused on driving value to the people who are going to use the data, then you're working on the wrong stuff. And honestly, I think that is the biggest problem for projects: you can have all the resources in the world and the smartest people in the world, but if they work on the wrong thing, then you've wasted them.

Jess Carter: Yeah. Absolutely.

Michael Tantrum: I think there's an element of pragmatism as well, in that, you know, don't let the perfect become the enemy of the good. At some point you have to get out of the lab and get into the field and you have to start building, and I think there's a right balance. "How much research do we do? How much analysis do we do? When do we say go? When do we pick a technology or a set of technologies?" Because nowadays, it's a set. You pick a set of technologies and you say, "Right, we're going to run with this. And even though in 12 months' time and two years' time we know things are going to change, pragmatically, we have to do something."

Roger Humecky: Nothing gives you more information than actually trying something out and seeing what works and seeing what doesn't work. And having an open conversation with people and setting that expectation up front, that we might abandon everything we just did for the sake of learning, and giving people the freedom to take risks and try stuff, but also making sure that everyone's okay with that. We're not necessarily married to anything, and it's okay if we start something and decide that's not the best approach and we throw it all away and start over again.

Jess Carter: Very good. Okay. Awesome. Well, guys, thank you for sharing your story today. These are unbelievable learnings. This is the kind of story that I think can help people avoid months, maybe years, of pain, and so it's generous of you and we're thankful that you're willing to share it.

Roger Humecky: All right. Thanks.

Jess Carter: As we wrap up this episode of Data Driven Leadership, I wanted to call special attention to a couple of themes we heard from Brian, from Michael, and from Roger: the organization's willingness to acknowledge that data transformation isn't the business they're in, and their support of IT in getting a data transformation and a system transformation right so that they could then yield better services and data from their IT team. It's a really insightful approach to this work.

If it looks like you're taking on a system modernization from the point of view of an IT department and its needs, and not from the point of view of the business and its needs, you may have it a bit backwards and want to reconsider. And if you aren't starting from what the business's value proposition is, it may be time to go back and listen to that beginning segment with Brian, where he talks about how important it is to think about the opportunities you'll have when you have the right foundation for your systems, if it's in the cloud, and how you can build on those, how you can invest in those. And if you decide to turn left or right in a month, you can, and you haven't lost anything. There's very little sunk cost when you're in the cloud.

The other thing I want to offer is a reflection point, and it's something I've advocated for with my clients. Maybe the organization really wants different data-driven metrics and outcomes from IT, or IT really wants a better infrastructure in place to support the business, or perhaps you've asked for clarity in your organization from an outside third party like a consulting firm and you've gotten a 200-page guide on what you should do, with recommendations and findings.

I would highly advise that what you consider a complete assessment, or support for the changes you're contemplating, really include recommendations that come with a roadmap. So it's laid out against time, even if the timeline is wrong, so that there's a chronological order to the change you're considering, and so that you're also contemplating the what, the why, the how, and the who. That you've asked your third party all of those questions, that you've asked them of your organization, or offered the organization answers to all of those questions. And that you treat them like hypotheses, identifying as quickly as possible against that roadmap whether your hypothesis was wrong or right, so that you can make mid-course corrections throughout your implementation.
I know that may sound weird, or maybe just like general strategy, but a lot of times I've seen and worked with clients who got the 200-page report and it didn't do this last piece for them: recommendations, a roadmap, a timeframe, a chronology of change. And that's just the last 10% that is so meaningful to knowing what to go do next.

And so I would encourage anyone in the position of maybe you're in the business side of things and you're trying to figure out where to go from here, how do you work more closely with IT to understand what they need so that you can get what you need? If you're in the IT department, to seek to understand and roll up your sleeves a little bit more around, "What is the business we're in, and how is it functioning, and what might they need that I don't understand yet?"

And then if you are working with a third party, or maybe you are the third party, my challenge would be: do everything in your power to make a playbook that is really functionally useful to the organization you're trying to serve. Thank you for listening. I'm your host, Jess Carter, and don't forget to follow Data Driven Leadership wherever you get your podcasts, and rate and review, letting us know how these data topics are transforming your business. We can't wait for you to join us on our next episode.
