Data Driven Leadership

AI Strategies to Authentically Supercharge SMB Growth with AWS’s Ben Schreiner

Guest: Ben Schreiner, Head of Business Innovation, US SMB, Amazon Web Services (AWS)

Overview

AI isn’t just for large corporations.

Small-to-medium businesses (SMBs) can—and should—leverage the power of AI, too.

Not sure how to start? Ben Schreiner, head of the Business Innovation team at AWS, is here to help.

Ben joins guest host Justin Bolles to dive into AI’s effects on SMBs and lay out practical steps for getting started with it. He explains what SMB leaders need to know about security and data privacy, shares examples of real-world applications of AI, and demystifies concepts like AI models and hallucinations.

Interested in learning more from AWS? You can find more information in this blog post.

In this episode, you’ll learn:

  • How to maximize AI’s impact on your business
  • Why data infrastructure is crucial to successfully using AI
  • How to leverage existing AI models for cost-effective solutions

In this podcast:

  • [00:00-02:45] An introduction to the episode
  • [02:45-07:00] How to start leveraging AI
  • [07:00-12:45] How to train AI models on your data
  • [12:45-19:10] How to choose a cost-effective model
  • [19:10-22:35] How to mitigate the risks of AI
  • [26:35-30:15] Setting a foundation with data

Our Guest

Ben Schreiner

Ben Schreiner, Head of Business Innovation, US SMB, at Amazon Web Services (AWS), brings an unusual blend of global Fortune 500 and tech startup experience, both as a technology customer and as a provider, to his work at AWS. He is an empathetic, trusted senior leader who has been advising CIOs and business leaders for more than 20 years.

Transcript

This transcript has been generated by AI and optimized by a human.
Show ID [00:00:01]:
The power of data is undeniable. And unharnessed, it's nothing but chaos.

Show ID [00:00:06]:
The amount of data, it was crazy. Can I trust it? You will waste money. Held together with duct tape. Doomed to failure.

Jess Carter [00:00:13]:
This season we're solving problems in real-time to reveal the art of the possible, making data your ally, using it to lead with confidence and clarity, helping communities and people thrive. This is Data-Driven Leadership, a show by Resultant.

Justin Bolles [00:00:31]:
Welcome back to Data-Driven Leadership. My name is Justin Bolles and I'm filling in today for Jess Carter. I'm a principal architect here at Resultant and I've been here for about five years. Today we are going to talk with Ben Schreiner, who is the head of business innovation and go-to-market strategies at Amazon Web Services. Our conversation is focused on AI, and specifically AI for small- and medium-sized businesses. We talk about the current state of the technology, where things are heading, how that's going to impact individuals who are leading their organizations, and what sort of drawbacks there are to the current AI boom, and we try to figure out the difference between what's hype and what's real and how those things can affect your business. So I hope you enjoy the episode. All right, Ben, welcome to the Data-Driven Leadership podcast.

Justin Bolles [00:01:26]:
Let's go ahead and get started. For the folks out there listening, give us a little bit of your background and your current position with Amazon Web Services.

Ben Schreiner [00:01:33]:
Sure, Justin, happy to be here. My name's Ben Schreiner. Been with Amazon almost five years. Have been in technology for, we'll just say a very long time. We've seen some things and some changes. I'm a little biased, but I head up the coolest team at AWS. It's called Business Innovation, and we talk to executives all the time about how they're innovating in their respective industries and how they're leveraging technology to do so.

Justin Bolles [00:02:02]:
Okay, great. So today we're here to talk about AI, its effects on small and medium businesses, and just how the most recent generative AI boom has changed the technology sector in general. So can you give us some initial thoughts on the current state? Where are we with AI as a technology and as a usable product at this point?

Ben Schreiner [00:02:24]:
The interesting thing about AI is that while it has gained popularity recently, it's been around for a very long time, decades, and Amazon specifically has been using it for decades to make our own operations more effective and efficient. But the recent boom in popularity stems from generative AI; its ease of use and kind of consumer nature have created an awareness that didn't exist for the broader AI solutions, which were more technical, with only technical folks understanding or leveraging them. And so it's become more popular, more mainstream, is probably the right way to say it. And I would say that's outstanding, because now we're having conversations about technology and about how the technology can help leaders be more effective and more efficient in engaging their customers, their sellers, their inventory. Many, many aspects of their business could be enhanced with AI and generative AI. We like to say, and I say this often to leaders, we all remember the Internet boom. And I think we could all agree that the Internet has changed a lot of industries, and we believe that this technology not only will change all of the same industries, but will change them faster than the Internet did. The Internet boom was, let's call it, 25 years ago, and again, a lot of change, but generative AI, and just how fast it is evolving, suggests that it will compress the time to impact compared to the Internet.

Justin Bolles [00:04:02]:
Okay. And for a business owner or a leader in a small or medium-sized business, what's the first step? How do we even get started in leveraging AI?

Ben Schreiner [00:04:13]:
I'll preface it with this: I'm very empathetic to the audience, and hopefully whoever's listening is not in this situation. But I've talked to probably thousands of leaders now about generative AI in the last, call it, 18 months, and there is a healthy fear of missing out. There is so much hype and so much excitement and energy around AI and generative AI that we are finding a lot of executives asking their technical folks, or whomever, their partners: hey, what are we doing with AI? What are we doing with generative AI? And for me, that's akin to running around with a hammer looking for a nail, which we don't recommend. Instead, we would recommend that the best way forward is actually to start with your business and what problems you have. And all businesses have a couple. They all spend too much money on something. I don't know what it is, but ask any sales leader or leader of a business; they spend too much money on something.

Ben Schreiner [00:05:17]:
Fill in the blank. They also spend too much time on certain things. And that time is something that I really want to challenge your audience to think about. Because if you can compress the amount of time you spend, and I didn't say waste, but that you spend, to the next sale or the next product being launched, whatever time compression you can make, you now find time to be able to do other things or to do more things. And unfortunately, none of us have figured out how to go back in time or forward in time. So if you can find ways to find more time, then small and medium businesses can actually grow faster because they're able to serve more customers. They spend too much money or too much time on things, and if you can solve either of those with technology, great. And so if you start with the problem, then look at the data you would need to solve that problem.

Ben Schreiner [00:06:13]:
Do you have the data to solve the problem at hand? And then look at the technology or the model as sort of that third step. That would be our recommendation: work backwards from a real problem that you can quantify as worth solving, because then you can justify the time and money you're going to spend on the technology to solve it. And then what we're finding, too, in a lot of my conversations, is folks start out very excited about what generative AI could offer. Then they try one of the tools, and the results are interesting. They're kind of right. Maybe they get a B, maybe a C on the answers. But if you click two or three layers down, or your questions are more specific, that's when the models, the generic ones that were trained on the public Internet, start to fail. Because those models don't know your customers, they don't know your products, they don't know how you've served your particular industry. Their knowledge is vast, right? The Internet's a pretty big place, but it doesn't know your specifics.

Ben Schreiner [00:07:19]:
Right? And so we see people getting a little disappointed in the initial interactions. And that's where we provide a bunch of coaching that says you have this data about your company, your customers, your products, your services. If you connect that to one of these powerfully trained models, now you can get more specific answers that are tailored to your customers, your products, your sellers, your particular context and circumstances. Now, that's a little more involved than the app you downloaded on your phone, but people are getting to that stage, and that's a real promising stage because now you're solving real business problems with data that's relevant and contextual. And it has a lot of promise to, again, compress some of those places where you spend maybe too much time doing certain things.

Justin Bolles [00:08:12]:
Okay. And talk a little bit about the process of training those models, right? If I'm a small or medium-sized business owner, obviously I have limited resources. You're talking about how I spend too much money on something, I spend too much time on something. What's the process of training one of those models on my business data? And what sort of investment should I expect to make before I get good results out of that trained model?

Ben Schreiner [00:08:36]:
Yeah. So, let's start with this. With the models that we're making available, first and foremost, we want you to have choice. You're a business owner, and if you only deploy one model and you don't compare it to other models, how do you know if you're getting the best results? Because all you have is the results from one particular avenue. So we make many models, some of the best in the industry, available to allow you to compare your use case and find the right price performance for the problem that you're trying to solve. That's priority number one. Number two, I don't want to get too technical, so I actually don't want to go into the training of the models, because that's advanced math and lots of stuff. And I don't have a PhD, and I don't know that many of your audience members have a PhD, nor do we want them to have to have a PhD.

Ben Schreiner [00:09:28]:
And that's probably the biggest benefit of leveraging one of the large language models, call it a foundation model, that's been trained on lots of different information, and then augmenting it with your own data. So connecting your data sources so that the model and the engine have the repository they've been trained on, plus your information. And so it's not really retraining the model in totality. That is very expensive, and I wouldn't recommend it unless absolutely necessary. But augmenting an existing model with your own data is very feasible. And again, we've made tools, including agents, that connect to 40 of the most popular software-as-a-service applications, think knowledge bases, think Salesforce, those kinds of common applications, again to make it easier to pull the data together so that the solution has the right information to be able to give you the best possible answer.
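
To make "augmenting an existing model with your own data" a little more concrete, here is a minimal sketch, assuming you are on AWS and have already created and synced an Amazon Bedrock knowledge base with your documents. It uses the AWS SDK for Python (boto3); the knowledge base ID, region, and model ARN are placeholders, and the exact parameter shapes are worth confirming against the current boto3 documentation.

```python
import boto3

# Bedrock Agent Runtime can retrieve from a knowledge base you have already
# created and synced with your own documents, then generate a grounded answer.
client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = client.retrieve_and_generate(
    input={"text": "Which of our support plans include on-site visits?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            # Placeholders: point these at your own knowledge base and at a
            # foundation model you have enabled in your account.
            "knowledgeBaseId": "YOUR_KNOWLEDGE_BASE_ID",
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
        },
    },
)

# The answer is generated from the documents retrieved out of your own data.
print(response["output"]["text"])
```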

Ben Schreiner [00:10:29]:
So I don't want to get into the nitty-gritty of how expensive and complicated it is to actually train your own custom model. That could be cost-prohibitive for many organizations, but it also may not be necessary. So I'd encourage folks to start with one of the popular, high-performing models, connect it to some data that you want to control, and then look at the answers there. The other thing I want the audience to really be aware of and appreciate is data. You've got the best-named podcast here for that, so I know your audience appreciates it. The important things with AI are where the data is, who has access to the data, and how secure said data is. And our approach at AWS, from the get-go, has been that security and privacy are paramount.

Ben Schreiner [00:11:20]:
When you leverage our tools and techniques, you are going to have control over the access to the data. It's up to you to set those permissions, and security, again, is paramount. Your data is not going to be used to train one of these underlying models or be made publicly available and potentially leak information about your organization beyond your control. And so we've taken a very security-first, data privacy-first, and protection-first approach. Our customers expect that, and I would suggest that your audience expects that. So I would just make sure that they understand where the data lives, who has access, and what the controls are. The other thing that's important to us is data access.

Ben Schreiner [00:12:02]:
And let me give you a scenario. If you were to connect a model to your corporate information and you want all of the people in the organization to be able to ask the application questions, you may not want everyone to be able to ask how much money the CEO makes. That may be in the HR data, but we wouldn't necessarily want that to be one of the answers the model provides. Now, there are certain groups that do have access to that information, and they may need to access it. So how do you make the access to the information, and that control mechanism of the model and the data, consistent with data access in the organization itself? That's really important, so that if I am asking questions of the model, I'm only going to get answers inclusive of the data I have the rights to, and I'm not going to get access to the CEO's salary. But somebody in HR asking questions of the model has different access rights and would now be able to get maybe that information. If you follow my train of thought there, it's really important. Otherwise, you have to go with the lowest common denominator of data that everybody has access to, and then the solutions are only that good. Because you've gone lowest common denominator, you would lose out on some of the potential benefits to the organization as a whole if that's your approach.

Justin Bolles [00:13:30]:
To encapsulate what I think you're saying: you can get down to what, in a more traditional database model, would be something like row-level security. You can get down to that row level, that granular level, based on your permission set, correct?

Ben Schreiner [00:13:46]:
Based on who's asking the question and the data they have access to, the model is going to only use and respond with that data. So, yes, that same security and data privacy extends into how the model responds to you.
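
The idea of permission-aware answers does not depend on any particular AWS service; the core pattern is simply to filter the documents that go into the prompt by the requester's entitlements before the model ever sees them. Here is a deliberately generic, hypothetical sketch of that pattern in Python; the Document type, the group names, and the build_prompt helper are all illustrative, not part of any AWS API.

```python
from dataclasses import dataclass


@dataclass
class Document:
    text: str
    allowed_groups: set  # e.g. {"hr"} or {"all-employees"}


def documents_for(user_groups: set, corpus: list) -> list:
    """Keep only the documents the requesting user is entitled to see."""
    return [d for d in corpus if d.allowed_groups & user_groups]


def build_prompt(question: str, user_groups: set, corpus: list) -> str:
    """Assemble a prompt whose context contains only permitted documents."""
    context = "\n\n".join(d.text for d in documents_for(user_groups, corpus))
    return (
        "Answer using only the context below. If the context does not "
        f"contain the answer, say so.\n\nContext:\n{context}\n\nQuestion: {question}"
    )


corpus = [
    Document("Executive compensation details ...", {"hr"}),
    Document("Company holiday calendar ...", {"all-employees"}),
]

# A general employee never gets HR-only documents into the prompt, so the
# model has nothing to draw on about the CEO's salary for that request.
prompt = build_prompt("How much does the CEO make?", {"all-employees"}, corpus)
print(prompt)
```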

Justin Bolles [00:14:03]:
Okay, you talked a little bit about how there are all of these different models. Can you kind of give us an overview of how those different models maybe work differently or how someone would go about choosing a model or set of models to compare to one another?

Ben Schreiner [00:14:19]:
It seems like almost on a daily basis, somebody announces some ginormous model. Right? You've seen it, and I'm sure your audience has as well. It's so exciting, if I'm honest, just how fast things are evolving. And so we have partnerships with many of the large providers. Hugging Face is a large provider of open-source models. We have announced support for Llama 3, we had Llama 2, and we have partnerships with Anthropic, AI21 Labs, and Stability AI, some of the more famous and top-performing model providers.

Ben Schreiner [00:14:52]:
Many of the models come in different sizes, and so you'll find ones that are, let's say, 7 billion parameters, then you have 30 billion, and then you're starting to get even bigger and bigger. Everybody should take this away: a bigger model is probably gonna be a bit more expensive to run, but probably able to do more things because of the breadth of information that it was trained on. And so this is where it becomes very important, Justin, for anybody who's looking to leverage a model to truly understand what problem they're trying to solve, and then try that problem against several models, right? So you could try it against a great big one, a medium-sized one, and a small one. And if the performance on your particular use case is similar, like within a tolerance range, then go with the smallest model possible, because your price performance, your return on investment for solving that problem, will be very high. If you instead deploy the biggest model and try to solve all your problems, I think you'll find out that people might be writing haikus with the model, and that may not be the best use of corporate funds. And you may reel back access because you're finding a big bill, but maybe not a big payoff, because you're using it for purposes that may not be as advantageous. And so, again, with discrete use cases, I think you'll find people running models on demand, and you'll tailor a model to the particular use case.

Ben Schreiner [00:16:24]:
The models are all trained on different data, right? And so they're gonna perform differently. And it's important for you to be able to compare those outcomes and outputs. And so we offer a way to assess models next to each other, to be able to provide the same prompts and then compare the results of those so that you as a user or a consumer can actually make an educated decision between model performance.
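
As a rough illustration of that kind of side-by-side evaluation, the sketch below sends the same prompt to a few differently sized models through the Amazon Bedrock Converse API with boto3 and prints each answer. The model IDs are examples only; check which models are enabled in your account and region, and verify the parameter names against the current SDK documentation.

```python
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Example model IDs at different sizes; confirm availability in your account.
candidate_models = [
    "anthropic.claude-3-haiku-20240307-v1:0",   # small, inexpensive
    "anthropic.claude-3-sonnet-20240229-v1:0",  # mid-size
    "meta.llama3-70b-instruct-v1:0",            # larger open-weight model
]

prompt = "Summarize this customer complaint and suggest a next step: ..."

for model_id in candidate_models:
    response = bedrock.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 300},
    )
    answer = response["output"]["message"]["content"][0]["text"]
    print(f"--- {model_id} ---\n{answer}\n")
```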

Justin Bolles [00:16:53]:
You rattled off quite a few different models there. Is that where you feel AWS's approach is different than maybe a Microsoft who's tied to like an OpenAI or Google with their Gemini model? Is AWS's approach to that different in that way?

Ben Schreiner [00:17:07]:
Yeah, Justin, I'll say we've had a different approach from the beginning. So we look at AI in three distinct layers, and the models are kind of in the middle layer. The bottom layer is technology. We've been investing in our own chips for training and for inference. You may have heard of a company called Nvidia, and you know how high in demand their chips are. We realized that we probably wouldn't be able to meet customer demand without making our own investments in chips to increase our capacity. In addition, the chips that we're designing use less power and less water, so they're also more sustainable, because we're responsible for those data centers. So it serves multiple purposes for us.

Ben Schreiner [00:17:52]:
But that's at that base layer, and those chips have very high price performance. The next layer up is indeed those models and how you interact with them. We took an approach to the market based on the belief that one model will not rule them all. We just don't think that's feasible. Instead, no different than our philosophy on Amazon.com, where we think choice is the ultimate motivator for customers: I want the most choice, I want the best price, and then I want the best and fastest shipping. You extend that into AWS, and we want you to have the most choice, the best price, and hopefully instant access to be able to solve whatever problem you might happen to have. And then the next layer up, above the models, is actually what we call an application layer.

Ben Schreiner [00:18:39]:
And we've had AI applications for a very long time that unfortunately most people don't know about. And so we're eager to get it out there that actually, the mechanism, the engine to personalize… if you've been to Amazon.com and bought something and we say, hey, other people like you bought these other things, that personalized recommendation engine, we actually make that technology available to any AWS user to just interact with the model that we've already created. And so we have also introduced our own models. As you can imagine, we have a lot of data and a lot of experience of our own. We've launched Amazon Titan. We have an image model and a text model. No customer data was used.

Ben Schreiner [00:19:26]:
They're all trained on data that we've gathered from our own operations. In addition, we've trained models on running AWS for the last 17 years. And so we have a model embedded in an application called Amazon Q inside of the AWS console that you can use to ask questions about using AWS and getting the most value out of all of the services we have available. So it becomes a technical advisor for you inside the console, again, trained on running AWS for the last 17 years. You're seeing it in a lot of different places, but our approach has been one more of choice and openness. Now, we've made a similar investment in Anthropic, and we have a partnership with them, and they're building and running their models on top of AWS, which is a great partnership for us and is propelling the AI community forward. But again, we want to make the best models we can available to our customers.

Justin Bolles [00:20:25]:
We've talked a lot about the upside of AI so far. Let's talk a little bit about some weaknesses. Can you tell us about the term hallucination, what that means, and how we can mitigate that when we're working with AI models?

Ben Schreiner [00:20:39]:
Hallucinations are a real thing. The models will inevitably tell you an answer that may or may not be true. And one of the things you need to be able to do is identify that, whether that's through humans reviewing the answers to validate them, or having guardrails, controls, in place to tell the model that if it doesn't have enough data, it should tell me that, versus, what I would call, lying to me and giving me an answer it's made up. Right? And so one of the risks is this: providing false information, or copyrighted information, or information that has been incorrectly summarized. And that's a real risk, especially if you're in a regulated industry or it's interacting with a customer. There have been many publicly known cases where models did things that the people who released the model wish they hadn't. And so for your audience, the advice I give is you need to ask yourselves: can I control what the model or the application can and can't do?

Ben Schreiner [00:21:44]:
If you don't have a good answer to that question, then you probably need to dig a little deeper into the risks that you're assuming if you do not have control. And again, there are many lawsuits and cases that would suggest that saying "I didn't know the model would do that" is not a good defense, and it won't be. Right? The rules and regs that are being worked on by Congress and others, I think we can reasonably expect them to be something along the lines of: are you in control? Prove to me that you're in control. And then, where did you get the data? And can you trace the answers back to the data that was used to provide them? I think those are some reasonable expectations that any decision maker, any leader right now should have. Ask those questions of anybody working on your behalf and make sure that you have good answers, because even if there aren't regulations now, they're coming, and you want to be able to have good answers to those kinds of questions. So we have again taken an approach to make sure that those things are easy to answer. We created Amazon Bedrock Guardrails to allow you to control that.

Ben Schreiner [00:22:56]:
No hate speech, no bias; really get into the specifics and set the parameters. For example, maybe you don't want the model to ever respond to a customer about a competitor, right? Because when you put these applications out there, there are people who will try to break them and try to get the model to do things that you didn't intend it to do. And so before you release something out into the wild, I do think it's prudent to make sure you have good guardrails around the scope of what that model is intended to do, because your users, your customers, whoever's interacting with the model, inevitably will do things that you didn't expect, and you want to make sure that you are in control over the results, or you may get some surprises that you may wish to have avoided.
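
For readers who want to see roughly how a guardrail attaches to a request, here is a short sketch with boto3, assuming a guardrail has already been created in the Amazon Bedrock console with your denied topics and content filters. The guardrail identifier and version are placeholders, and the guardrailConfig parameter should be checked against the current SDK documentation.

```python
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{"role": "user", "content": [{"text": "What do you think of Competitor X?"}]}],
    # Placeholder guardrail, created ahead of time with denied topics (for
    # example, competitors), content filters, and blocked-message wording.
    guardrailConfig={
        "guardrailIdentifier": "YOUR_GUARDRAIL_ID",
        "guardrailVersion": "1",
    },
)

# If the guardrail intervenes, the response contains your configured blocked
# message instead of a free-form answer.
print(response["output"]["message"]["content"][0]["text"])
```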

Justin Bolles [00:23:44]:
About ten years ago now, we watched IBM Watson play Jeopardy, and that was sort of the first big AI thing to come out. Ten years later, we're looking at much greater availability and usage of large language models. Crystal ball: what do the next five years, ten years look like in the AI realm?

Ben Schreiner [00:24:05]:
So I believe just about every customer interaction will be impacted by AI and generative AI. I think how we interact with companies, products, and services will just get better leveraging these tools and techniques. I see that coming. I'll give you a perfect example. I'm sure you've called the tech support desk at some point in your life and you've been put on hold, like I have. And you know what they're doing? They're searching for the answer. We've got an AI tool that hooks into our Amazon Connect call center, where the agent, which is AI, is listening to the call, can be searching for the answers as the call is happening, and can prompt the call center agent with the answers live, kind of like a teleprompter for newscasters. The beauty there is it shortens the time you're on hold and it reduces the time to get you the answer that you need.

Ben Schreiner [00:25:08]:
And at the beginning of the show, we talked about what we're trying to do: reduce time. Right? Happier customers, reduced time. All of those things are real benefits. The other example I think your listeners would be interested in: we've got a use case that we think has a lot of promise, which is preparing for a board meeting. Almost all small and medium businesses have a board, or investors, or somebody that the executive team has to answer to. It could be a private equity firm, whomever. And they spend a lot of time, again, preparing for that meeting, because it's an important meeting.

Ben Schreiner [00:25:44]:
Right. And what if we could help you prepare for that meeting and compress that time by looking at all of the information you've provided to the board over the last couple of years, looking at the information you're about to provide to the board, and then asking the model: What questions is the board going to ask, and what are the answers to those questions? And allow the executives to role-play with the model about this next update. I think some of your listeners would value that a great deal, to be better prepared for that meeting and to anticipate some of the questions that could be asked based on the content and the historic content that was provided. So we think that one's got a lot of promise for executives, just to help them. And it also makes AI and generative AI real for them in that way. Those two are examples that we're particularly interested in. And then I'll give you one last one.
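
A bare-bones version of that board-prep idea can be as simple as pasting past and upcoming board materials into one prompt and asking the model to anticipate questions. The sketch below reuses the Bedrock Converse API; the file names, model ID, and prompt wording are purely illustrative.

```python
import pathlib

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Illustrative file names: plain-text exports of past and upcoming board decks.
past_updates = pathlib.Path("board_updates_2022_2023.txt").read_text()
next_update = pathlib.Path("board_update_draft_q3.txt").read_text()

prompt = (
    "You are helping an executive team rehearse for a board meeting.\n\n"
    f"Previous board updates:\n{past_updates}\n\n"
    f"Draft of the upcoming update:\n{next_update}\n\n"
    "List the ten questions the board is most likely to ask, and suggest a "
    "concise answer to each, grounded only in the material above."
)

response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{"role": "user", "content": [{"text": prompt}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```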

Ben Schreiner [00:26:40]:
A real customer that has a real product out there. The company's called Blast Motion, and they have just an absolute ton, a treasure chest, of videos of swings. So think golf swing, think baseball, think softball. They have all these swings, and they've come up with a training program that allows you to videotape yourself swinging and then shows you how to take corrective action, basically giving you coaching. Let's just say not all of us parents can afford a private swing coach for our son or daughter to play baseball or softball, and many of us have spent more money than I care to admit on lessons. So what if you could have a swing coach kind of in your pocket, almost, allowing you to get coaching and make progress? We think it'll actually help.

Ben Schreiner [00:27:32]:
Some of those folks that, again, don't have the economic means can get a higher level of coaching. It provides athletes a better chance to maybe go to college or play professionally, because they're able to get some coaching earlier in their career in a more affordable and accessible way. And that one has a lot of promise. AI is going to change how we all learn in the not-too-distant future as well. I'm really looking forward to that.

Justin Bolles [00:27:57]:
Interesting. Okay, last question for you. What haven't we talked about yet? What sort of things do you want to get out there to the audience that we haven't touched on just yet?

Ben Schreiner [00:28:06]:
Great question. Again, I'm going to tailor this to the audience. Probably out of ten conversations I have about generative AI, eight of them end up being actually a data problem. Not an AI problem, not a generative AI problem. It's actually: my data is scattered all over the place, I don't have a good view, data proliferation. Truly getting your arms around your data is priority number one. It is foundational to getting the most value out of AI or generative AI: you've got to have your arms around your data, which I'm sure your audience would agree with.

Ben Schreiner [00:28:40]:
It's the first step in the equation. We provide a lot of tools in the cloud and in AWS to make it easier to get your information and pull it all together, and then start to look at these use cases one by one and make sure you're picking the right model, the right tools. But I would encourage folks to get started by working backwards from that meaningful use case. Often it's the data that is the hiccup and the place where we actually need to put in that foundation before you can start to build some of these cool new capabilities. So if you've got your arms wrapped around your data, then it's go time. And if not, take comfort that there are a lot of people who don't have their data under control, but you need to get help and you need to make progress on that. Otherwise, folks that have that and start to build with AI are going to start to separate themselves from the competition, and you're going to start to feel like you're falling behind, and it's going to happen faster and faster. So we encourage folks to engage, partner with someone who's done it before, and really start to look at how you make your business run more effectively and efficiently with data, and then these AI capabilities on top of it.

Justin Bolles [00:29:49]:
Awesome. Well, thank you so much for your time. Ben, thanks for joining us here on Data-Driven Leadership.

Ben Schreiner [00:29:54]:
It's been a pleasure. Thanks so much for having me.

Justin Bolles [00:29:56]:
Don't forget to follow Data-Driven Leadership wherever you get your podcasts, and rate and review, letting us know how these data topics are transforming your business. We can't wait for you to join us on the next episode.
