CrushBank - AI for MSPs

Reset your expectations for AI

April 04, 2024 CrushBank Season 1 Episode 5

Reset your expectations for AI. David Tan discusses the current state of AI technology with Evan Leonard. He focuses on the importance of data management and the need for governance, with excellent advice for MSPs on their AI journey.

Speaker 1:

It's been another eventful few weeks in the world of artificial intelligence. In just the last week alone, the governor of Tennessee has signed the Elvis Bill meant to protect musicians from AI-generated content, Congress has banned the use of Microsoft Copilot by all members and staff, and Amazon came out and admitted that their AI-powered cashierless stores were really just offshore workers remotely monitoring shoppers via camera. Is it time to reset our expectations around AI? I sat down with my longtime business partner and friend, Evan Leonard, to talk about all that and more.

Speaker 2:

This is the CrushBank AI for MSPs podcast. So, David, the title of the webinar is Reset your Expectations Around AI. What is it that really frustrates you around the expectations?

Speaker 1:

So this is going to come off as a bit of a crazy way to start this webinar, especially when you consider the topic and what we do as a business. Our entire company is built on AI technology, but my single biggest frustration in talking to people and working with customers and giving lectures and holding sessions and educating people is that people just plain assume it's better than it is. Functionally, they believe what happens behind the scenes, in the black box, as we call it for AI, is just pure magic. And I know that comes from a place of unfamiliarity and awe in how this stuff has just emerged recently. I mean, it's only, what now, 18 or so months since OpenAI released ChatGPT and sort of changed the world and opened the floodgates on generative AI, and people just assume this stuff is incredible and it works out of the box and it's, like I said, better than it is.

Speaker 1:

The problem with that is that it causes all sorts of downstream issues for an organization trying to embrace this technology. It means that you assume the output is correct when it isn't. That's my single biggest issue with AI: when you sort of let it loose in your organization and let your employees or your staff or your team or whoever work with it, if they're not experts in what they're asking AI to do, then how can they vet it, when there's no guarantee that the technology is right? It means you're not properly training those employees on how to create inputs, how to do prompts, how to prepare data; basically, how to get your organization ready for AI.

Speaker 2:

David, you love to tell these stories around, you know, some things that hit headlines, some things that don't. You're a wealth of just information gathering. But I remember the time you told me about the attorney who was using OpenAI and how it came back with a fictitious case and all. It was like a prime example of how people who just don't vet it out don't understand some of these responses.

Speaker 1:

Yeah. So that's a great story, which most people are probably familiar with, so I'll tell it really quickly. But I'm also going to tell another story which has made fewer headlines but is more impactful, I think, when talking about...

Speaker 2:

You told me I wasn't.

Speaker 1:

No, no, the attorney one is fun. Long story short: a New York-based attorney, actually, was trying to prepare a brief for a court filing, and he went to ChatGPT and asked for precedents around a certain case. And ChatGPT spit out an answer that included a couple of cases that he could use as precedents for his filing. And he prepared the brief and he submitted it to the court, and everything was all fine and dandy until someone realized ChatGPT had made up the cases, completely hallucinated the precedents, because, quite frankly, they sounded like examples of cases that fit the narrative. And that's what large language models do: they create language that sounds like it should be accurate. It doesn't always have to be accurate. It's not always grounded in the truth, which I know is something we'll talk about a little bit more as we go on. The other one that I find kind of interesting, which I love to tell, it's not about hallucinations, it's about bias, which is another really big problem in AI in general, which, again, I think we'll talk a little bit more about.

Speaker 1:

But years ago, three or four years ago, Amazon built a model, an AI model, to use to evaluate resumes from candidates. And in order to do that, you need training data. Data is going to be a continuing theme of this webinar, so be prepared for it. So, to get that training data, what did they do? They used the resumes of the employees they'd hired over the years, and this was on the AWS side, the technical side, and they graded them, so Evan's an A, David's a B-plus, so-and-so, and they used that to train the model and evaluate resumes as they came in.

Speaker 1:

The problem was, if you think about tech traditionally, and we know this as a couple of people that have owned a tech company, it's been a very male-dominated space. We went for years without finding any female tech employees because they just weren't out there. Fortunately, that has changed, which is great, but that hadn't always been the case. Anyway, long story short, Amazon fed all of their resumes in and the model started to identify the fact that only men made good tech employees. So it immediately spit out and rejected any resume from a woman, any resume that looked like the candidate worked for a female-focused company or went to a female-specific school, things like that. So that's an example of how the bias gets baked into the model completely innocuously. We're going to talk about Google at some point, I know, the things that happened with the Google model, with the images they were creating. It looked like someone maliciously tried to train the model to be woke. That's not how these things work. It just happens naturally, it happens organically, and it can be a little bit disconcerting, obviously.

Speaker 2:

Yeah, well, I mean, the humans always have some input in the training, and, you know, a lot of people just may not have certain experiences or understandings and, like you said, it happens sometimes by accident, or a lot of times...

Speaker 1:

Yeah, no, it's generally by accident, of course.

Speaker 2:

So I know you've spoken about this: there are limitations to AI. What are some of the limitations? I mean, everyone thinks that, like, AI is this magic and, you know, just always comes up and spits out answers and everything else, but they don't really understand the underpinnings of it. So what are some of the limitations here?

Speaker 1:

So there's a couple of different ways we can talk about limitations around AI. First, I'll get a little bit technical here for a minute, and at the risk of boring people, I won't go into a computer science seminar lesson here. But the technology that drives all of this is what we call GPUs, graphics processing units, and they are computer chips. I'm going to oversimplify it: they're computer chips that are very good at mathematical calculations, and that is what is required for AI. So the first limitation around AI is, quite frankly, we don't have enough GPUs. There are just not enough on the planet. It's why NVIDIA's stock is at 900 or whatever it is today. It's why NVIDIA is one of the most powerful and valuable companies in the world, and it's why everyone from Microsoft to OpenAI to Amazon and Google is trying to get into the space of creating chips. Right, you wouldn't think that all these born-in-the-cloud companies would become hardware manufacturers, but it's that critical for it.

Speaker 1:

The other way we could talk about limitations, again, is what the actual software can do. So first and foremost, like I said, it has to be trained. There is data, and you have to teach the system how to work and understand inside your own domain of knowledge. So, for example, and I don't want to make this a CrushBank ad by any stretch, but we spent hours and days and months training models around IT support understanding and knowledge, and we're able to build domain-specific models that understand that technology and that language, quite frankly.

Speaker 1:

So if you go and ask an untrained model, like, again, go to ChatGPT and ask it a question about a specific system, it may come back with an answer, because it has that information that it pulled off of Google or off the web or wherever, but it's not trained to understand the actual how.

Speaker 1:

Things like bias and hallucination and drift all get baked into the conversation, all get baked into the model. There's this term we use, it's called the black box problem, where you look at a set of inputs and you look at a set of outputs and you don't understand why the model, why the AI, came back with those outputs. A great example of that is, again, from the early days of ChatGPT, when a New York Times writer basically tricked Bing, and it tried to convince him to leave his wife and run away with it, and Microsoft and OpenAI couldn't explain why that happened. So I think it's critical, as an organization, that you understand some of the limitations and how to overcome them and, more importantly, how to leverage them and how to really take advantage of the technology, rather than just, again, assume it's perfect and let it be. That's great.

Speaker 2:

And you know, being in infrastructure with you for the last 30 years, you know, and I was never a technologist, I had to, you know, really understand infrastructure and applications and databases and all these things that I never thought I'd need to know, you know, growing up and everything. And now we've kind of switched gears here and are now doing, you know, basically, essentially, software development. And there's so many people that I've talked to, whether it's in YPO or other avenues, and they say, well, you know, I have my development team looking at the AI, right? And, well, they're developers. So, I mean, why not, right? They should be able to handle AI or really fully understand this.

Speaker 2:

And you and I both know that there's plenty of times that people have called us up and said, oh, I won't say the word, oh my, you know, we don't know what we've done here. And, you know, have we exposed our data, all these other things? And, you know, I'm amazed at the different skill sets that we need to really implement AI as an application, or what people need to do to implement it, to infuse it into their applications, or what they're trying to do with it. And, you know, help us understand, like, what kind of skill sets do you need if you're thinking about, you know, coming up with an AI solution, or somehow using AI to help you with anything from data to information to metrics and all that kind of stuff?

Speaker 1:

So first I'm going to talk a little bit about the problem you described and why I think it happens, and I think it's kind of interesting, and we will talk about the skill sets, because obviously they're numerous, the things that you need to have an expertise in. But I'm going to use an example, and I was actually just having this conversation with someone this morning. We were talking to a big financial firm. They're not a client, they were just someone we know. They were actually someone that did some early investments in our company, and they do a bunch of internal development. And basically they described that they were having some issues getting the outputs, the performance, what they wanted, out of some of the AI solutions they were building internally, and they have a huge development shop. And the way I explained it to them, I'll take it and make it a little bit more personal. Let's say I came to you. So we started this company back in 2016, as we talked about, and we've been developing ever since, now, for the last seven or eight years. Let's say, three or four years ago, I came to you with an idea and I said, hey, I've got this great idea around mapping functionality, geospatial technology, that we should put into CrushBank. Ignore the fact that it has nothing to do with our platform and just go with me for this example. So I came to you and said, I have this great idea around mapping and we're going to do X and Y, and you say, I love it, let's get it into the pipeline, let's get it on the roadmap, let's get it deployed. What we would do is we would build a bunch of the front-end code and we would throw a bunch of API calls over the fence to, probably, Google Maps or Garmin or wherever we're getting the GPS data from, and we would get back answers and we would interpret the answers and we would display them inside the application. And we would have a ton of faith that that mapping information we got back from, we'll just say Google for now, was accurate, right? So in other words, let's use an example at our old MSP. If we wanted to plot someone's path, a tech that was on the road and has to go to four different locations in a day, you know, we can maybe build an algorithm to optimize that with mapping data, and we'd be confident. The last thing I need to worry about is the mapping data. The problem is that all of these companies think you can do the same thing with all these APIs that leverage large language models. And I'm not picking on them, but I'm going to use OpenAI as an example. So people think that you can just throw an API call over the fence to OpenAI, give it a bunch of data, and you will get back an answer that enhances your application or is accurate or is valuable or is useful, and that's just simply not the case.
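
To make that concrete, here is a minimal Python sketch of the "API call over the fence" pattern David describes, using the OpenAI client library as an example. The ticket-summary prompt and the function itself are hypotheticals, not CrushBank's implementation; the point is the final comment, which is where the mapping analogy breaks down.

# A sketch under stated assumptions: the prompt, model choice, and use case
# are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_ticket(ticket_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Summarize this IT support ticket."},
            {"role": "user", "content": ticket_text},
        ],
    )
    return response.choices[0].message.content

# Unlike the answer from a mapping API, there is no guarantee this summary is
# grounded in the ticket. A subject matter expert or a downstream check still
# has to vet it before it is trusted.
print(summarize_ticket("User cannot connect to VPN after a password reset..."))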

Speaker 1:

There is a suite of skills that you need to understand in order to leverage this stuff that is way beyond what most organizations have today. And, quite frankly, it's not a surprise they don't have them, because this is new technology. Two years ago, no one was talking about this. If I was on a webinar two years ago and I said to everyone listening that they needed to hire prompt engineers, no one would know what I was talking about. Maybe I didn't even know what I would think of if I said something like that. But it's as simple as that. It's as simple as having people that understand how prompts work. Well, it's not quite as simple.

Speaker 1:

But you need to have data scientists. You need to have people that understand data and how to manipulate it. You need to have subject matter experts. So we've built applications around delivering IT support better, and our entire company is made up of people that have been in, or spent some time in, managed services or IT support. Because I know, when I see a ticket and I have to create a summary of it, I can vet what that summary is. If you show me a medical record and say, write a summary of that, I'm completely out of my depth. So you need subject matter expertise. You need people that understand data science and how that works. You need people that can make these prompts, that understand how to tune these models. And you need to start with the foundation that these people don't exist in your organization today. Maybe there are people there that have the expertise or have the capacity or the capability, but they need to be trained, and just taking a developer and saying, use these API calls, is a recipe for disaster.

Speaker 2:

And just from, again, our experience: we're on our fourth or fifth iteration of Watson, right? Just going from semantic search to generative, to everything that we can do with it today. You know, the upkeep of this stuff, the upkeep of the applications you're connecting to, the upkeep of the baseline technology, it continually changes, right? So if you're not continually in that business, in that understanding of the technology, I mean, things go sideways pretty quickly, right?

Speaker 1:

Yeah. So the other thing that changes is the results that come back from those API calls. So let's say, for example, you build something and you put some prompts together and you have a great solution that's bringing back valuable data, valuable answers. If this was a database or a spreadsheet or some sort of mathematical system, I know that every time I do select star from customers where city equals New York, I'm going to get a list of the New York customers. That's a database reference for my non-technical friends on the call. I know if I'm in Excel and I say two plus two, I know I'm going to get four.

Speaker 1:

If I use Claude from Anthropic as my large language model and I send the same prompt over today and three months from now, I'm not guaranteed to get the same answer. There's going to be different data. The technology is made to generate different answers, and that's the thing that people need to understand. It's not only that they need to upkeep the applications. You need to monitor the answers you're getting back from these systems to make sure those things we talk about, hallucinations, bias, drift, are not making their way into the system. And, I don't want to steal your thunder, but it's why we talk about governance and why that's so important for what we do.
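
A rough Python sketch of what that kind of answer monitoring could look like. Everything here is an assumption for illustration: call_model is a placeholder for whatever LLM API the application uses, the golden answers are ones a human validated up front, and a production system would likely compare semantic similarity rather than raw strings.

import difflib

def call_model(prompt: str) -> str:
    # Placeholder: swap in the real LLM API call your application makes.
    return "stub answer for local testing"

# Reference answers captured when the prompts were first vetted by a human.
GOLDEN = {
    "Summarize ticket 1234": "VPN client fails to connect after a password reset.",
}

def has_drifted(prompt: str, threshold: float = 0.8) -> bool:
    # Re-run a known prompt and flag it if the answer moved too far from the
    # vetted reference. Hallucination, bias, and drift all surface this way.
    current = call_model(prompt)
    similarity = difflib.SequenceMatcher(None, GOLDEN[prompt], current).ratio()
    return similarity < threshold

for prompt in GOLDEN:
    if has_drifted(prompt):
        print(f"Review needed, answer drifted: {prompt!r}")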

Speaker 2:

You know, I think this next question really kind of ties in nicely, about tuning the model, right? So I hear you talk about it all the time, and, you know, to your frustration sometimes, because you're trying to teach, you know, from myself to the sales team to marketing to everyone, the importance of prompt tuning, how they are tuning the model. So, you know, talk to us a little bit about that and what the importance is.

Speaker 1:

So this goes, again, back to what we talked a little bit about with subject matter experts and understanding your environment and your domain of knowledge and your industry. But large language models are, for simplicity's sake, a collection of a lot of data. They're represented mathematically, they're tokenized, they're plotted on a Cartesian plane. There's a whole bunch of high-end mathematics that goes into it. But for simplification, it's just a lot of data, and in order to get optimum results, that data needs to be the data that is relevant to your use case, your example. So let's say, again, I want to build an IT-specific application using OpenAI, and I'll just say I use GPT-4, their latest and greatest model. It's not custom-built for what we do. So there's a few options of how you can attack that. You can build a new large language model from scratch. Good luck with that. I mean, in this crazy world we're in, you could probably get someone to fund it if you have a good enough story. But we're talking millions and millions of dollars and technology and time that you just don't have access to, right? Things like the GPUs that I referenced. So let's just take building models off the table. You can fine-tune a model. Again, this is expensive, right? Because, think about it: GPT-4, for example, has 1.7 trillion parameters in it. Do you know how much you have to add in tuning to that? It's not fine-tuning anymore. It's hundreds of millions of examples to move the needle on that. Like, 1.7 trillion is a lot. I know I laugh when I say that, but that's a lot of parameters to try and move the model on.

Speaker 1:

The reason we chose IBM, and this is, again, I love them and our partnership with them is crucial to what we do, and I'm a big believer, but this is not meant to be an IBM advertisement. What we do with IBM is this technology called prompt tuning, where we essentially can tune a large model with a lot less data. So for our summaries, for example, for our summarization technology, where we look at tickets and create summaries of them, we did that with a couple of thousand examples, as opposed to having to put tens and hundreds of thousands of examples, and maybe even millions, together. So prompt tuning, which was a technology that was really pioneered by MIT and IBM, is a really powerful way to do that, and it's important.
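
CrushBank does this through IBM's stack; for a rough sense of the mechanics only, here is a sketch using the open-source Hugging Face peft library instead. The base model name and the init text are placeholders, not what CrushBank actually trains.

from transformers import AutoModelForCausalLM
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model

base = "bigscience/bloomz-560m"  # stand-in model for illustration
model = AutoModelForCausalLM.from_pretrained(base)

config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    prompt_tuning_init=PromptTuningInit.TEXT,
    prompt_tuning_init_text="Summarize this IT support ticket:",
    num_virtual_tokens=20,  # only these learned soft-prompt tokens get trained
    tokenizer_name_or_path=base,
)

peft_model = get_peft_model(model, config)
# The base model's weights stay frozen; only the tiny soft prompt is updated,
# which is why a couple of thousand examples can be enough to move the needle.
peft_model.print_trainable_parameters()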

Speaker 1:

But the other part of that, which I think is important for people to understand, which is, again, one of the key takeaways I'd like people to have from this, is that size doesn't matter. And I try not to laugh when I say that, but please don't take the double entendre reference to mean anything. Bigger isn't always better when it comes to large language models. Sure, if you want a large language model to help you create a novel or to write a play, or even do some sort of marketing exercise with you, then sure, a lot of parameters is good, because you need to understand how people think, how they talk, how they act. But we got much better results out of a 3 billion parameter model than we got out of a 20 billion parameter model, so one-seventh of the size, because it was custom-built and we were able to tune it much more acutely to what we were looking to accomplish.

Speaker 1:

And I think that's important to remember, because if you're building an AI solution and you're going to just use the OpenAI APIs (that's a mouthful), they only have a few models, and that's not a criticism of OpenAI at all. They've got some of the best models on the planet. But we have multiple different models in our system for everything from summarization to resolutions, to chat, to recognition of identities, to classification. They all have different technology needs and they all use different models. So I think it's crucial to understand that.

Speaker 1:

That kind of goes back to your previous question about the skill sets. It's important to understand that it's not one model fits all. Go to Hugging Face, for example. Hugging Face is kind of the repository, the open-source repository, for large language models. There are like 80,000 of them on there, some insane number. I can assure you, people aren't making them because they're bored. They're making them because they all have different use cases and different needs. So I think it's really important, again, to understand what you're working with and how you can leverage these different sorts of language models.

Speaker 2:

You know, there's a yin and a yang to everything, right? So I think we talked about the yin just now, and I think we're about to talk about the yang, in certain respects, right? So that's data, and the real importance of this aspect, right? So one of the things that we do is we create, you know, a private data lake for companies, and with everything you just spoke about, right, how we train, how it's your own data, and, you know, the importance of that, right? So, you know, we always have this expression over at CrushBank: it's all about the data. And, you know, when I hear that, I always kind of chuckle. And you and I, you know, growing up in the 80s and 90s, we watched all the same movies, we have all the same references to movies and everything else, and one of the movies that was a little bit of a sleeper to me and to you was this movie called Sneakers that was out in 1992.

Speaker 1:

It starred Robert Redford, a classic, and Ben Kingsley.

Speaker 2:

There's so many great actors and actresses in that movie. You would be amazed. And if it's not on your list, put it on your watch list.

Speaker 2:

But there's one dramatic scene where Ben Kingsley and Robert Redford, they used to be best friends, and you have to watch the movie, they've, you know, not kept in touch for various reasons. And it's a dramatic scene towards the end, and Ben Kingsley is so frustrated, and he's like, it's all about the information, right? And he kind of says this to Robert Redford, and, you know, if you know the plot of the movie, it's very dramatic, it's a very bold statement. And I kind of feel it's the same way here, right? So it's about the data, and the difference between having good data versus data that's not so good, right? Because that can drive different types of results, right? We've seen it with our clients, where you have some that are just amazing, have amazing data, and they have tremendous results right out of the gate. Or they're on this mission, this drive, to say, you know what, the future is all about data, right? This is our biggest asset.

Speaker 2:

It's all about the intellectual property and what we can do here. The future could be really amazing, and we have to drive this into our organization. And the ones that do it just see tremendous results, whether it's in our business or, you know, some of our friends who are doing some other types of use cases. That's where this is so vitally important, right? So what does all this mean, and how does AI make a huge impact, or, you know, not such a huge impact, around data? I know you're just like a wealth of information around this stuff, so give us some insight on it.

Speaker 1:

So when I talk to people, I always try to give some advice, and one of the first things I always say is exactly that: it's all about your data. Right, you can't have AI initiatives in your organization if you can't get data to feed them. Now, there are cases where you can leverage data from somewhere else. Right, let's say you are in some sort of a business that is driven by weather patterns. Right, you're a coffee shop and you sell more hot chocolate in the winter months than you do in the summer months, and you want to figure out how to order your hot chocolate from your manufacturer, from your distributor. So you can get a bunch of weather data from the Weather Channel and you can build some AI solutions around that. But that's still only half the story. The data really drives everything in your organization. But it's not just the data, it's what you do with it. And I feel like at this point I'm turning into a storyteller. I should be wearing a cardigan sweater with suede patches and smoking a cigar, but I'll tell the story anyway. Part of this story didn't particularly age well, but you'll have to overlook that. You'll understand what I mean in a minute. One of the best use cases I've ever seen or heard of using your organization's data was House of Cards, with Kevin Spacey. That's the part that didn't particularly age well, but we'll overlook what happened there and we'll just talk about the show, which was a great show before his inner demons came out to the public. I was actually at an IBM conference seven or eight years ago and he was the keynote speaker. It was fascinating, and he talked about how Netflix leveraged their viewer data to essentially build House of Cards in a lab. So what I mean by that is, they looked to see what type of television shows people were watching. Oh, political dramas? Great. They looked to see who the actors were that people were interested in. Robin Wright Penn, Kevin Spacey, all the other people that were in that show. Check, check, check. So they literally constructed this show out of data that they were able to glean from their viewers' habits and built entertainment around that. But they even took that one step further, and they basically tailored the ads you saw for House of Cards to your viewing habits. So, in other words, if I had watched American Beauty half a dozen times, maybe I did, maybe I didn't, I would probably get an ad that featured Kevin Spacey. If you had watched a show that featured Robin Wright Penn, you would probably get an ad that featured her. So they were taking all this data, and again, this was years ago, we've come much further than that, and using it to drive outcomes in their business.

Speaker 1:

So I talk about AI and I say, listen, the first thing you need to do is get a handle on your data. Inventory it, catalog it, understand how to get access to it, know who knows what it means. But just as important as that is: determine what outcomes you want to drive with this data. So I'll give you an example, and I'll ask a question that brings it close to home. We owned an MSP for 30-plus years, and we had so much data in that company, and we dissected it seven ways from Sunday, upwards, frontwards, backwards, sideways, to figure out what it told us. If I told you that I was able to get a handle on all of the data inside of Chips when we were running the company, what type of outcomes could you have driven, as a business owner and a leader, with the breadth of data that we had inside that organization?

Speaker 2:

Yeah, you know, it's funny, because you and I belonged to a group called True Profit Group for many, many years, and, you know, it drove us to be more conscientious of what our data really told us. And, like, where could it help our organization? So, just from the IT support perspective, you know, we learned that when a ticket was touched more than once, right, and if it was touched two or three times, forget about it, we were in jeopardy of churning clients, right? They were very unhappy with our support level. And, you know, we were following this, we were trying to figure out, like, what's going on, really just getting the intelligence out of this information: how do we combat this churn? Because we also learned that you're going to lose about 12% of your clients or your revenue per year.

Speaker 2:

Yeah, and there's really not a lot you can do about it. That's at a good company, right? One percent a month. And it was because someone got bought, or someone went out of business, or whatever the case may be, or, you know, maybe you had some poor results, but you try and limit that as much as possible. And we were able to hone in on that information, which is also kind of what started to drive us into this business. But we started to figure out that, you know, if tickets had too many touches, for example, we were in jeopardy of losing that client, that that was an unhappy client.

Speaker 1:

Yeah, you know, it's funny, and that's why I say it's important to do this exercise and understand what data drives your business. Because, again, you can't have successful AI outcomes without data, and if you can't get what you need out of that data and understand how it affects your company, then you're honestly just wasting the time. But the funny part is, and I'll give a little bit more detail on this, this was a data exercise that we had done at Chips with a bunch of other like-minded MSPs, and we determined that the magic number, for whatever reason, was 1.3. So at a client, when the average touches per ticket across the entire user base got above 1.3, it drove customer sat through the floor, which then drove customer churn through the roof. I hope I said those right. Good, okay. But the funny part is, when we started that exercise, and I did it because I'm a nerd and I just needed something to do with all this data that we had, we looked at every other metric. We looked at average cost, obviously. We looked at response time, we looked at resolution time, we looked at, you name it, against customer churn, until we stumbled across the one that really fit the bill, which was, in this case, touches per ticket, however you want to call that, escalations, handoffs, whatever the case may be. That's what frustrates clients, and that's what drove customer churn.

Speaker 1:

But I implore you, when you do this: first, I implore you to take off on this data exercise, to understand what data you have, what you have access to, and how it's valuable to your organization. But I equally implore you not to go in with any preconceived notions. I mean, it's fine to have them, but be open to changing your mind. Right? Don't assume you sell more hot chocolate because the weather's bad. Right, there's a funny old line in Cheers where Norm talks about how he drinks beer in the summertime because it's hot, and then Cliff asks him why he's drinking in the winter. Well, what else are you going to drink? So it's not always black and white, the correlations aren't always where you think they are, but it's important, again, to do that exercise and do that investigation in your organization.
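
For a sense of how simple the mechanics of that exercise can be, here is a small Python sketch using pandas and the 1.3 threshold from the conversation. The data frame is made-up sample data; a real version would pull from your PSA's ticket export.

import pandas as pd

# Hypothetical ticket export: one row per ticket, with a touch count per ticket.
tickets = pd.DataFrame({
    "client":    ["Acme", "Acme", "Globex", "Globex", "Globex"],
    "ticket_id": [101, 102, 201, 202, 203],
    "touches":   [1, 1, 2, 1, 2],
})

# Average touches per ticket, per client; above 1.3 is the churn-risk signal
# described above.
avg_touches = tickets.groupby("client")["touches"].mean()
at_risk = avg_touches[avg_touches > 1.3]

print(at_risk)  # Globex averages ~1.67, a churn conversation worth having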

Speaker 2:

Yeah, David, I'm going to take this in a slightly different slant here now, where, you know, we talk about AI and humans, and a lot of people come to us and say, hey, how can I get rid of people? Right? If you're a business owner, you know, some days you sit there up at night going, how can I get rid of people? But then you realize that people also drive your business and success and everything else. And, you know, what we found is, you know, first of all, this stuff right now, how it is, it enhances your people.

Speaker 2:

But the other part of it is, you know, we've seen where it can kind of, you know, enhance your data, right? We're going to take away some menial tasks so people can actually do some things that are a little more important, and where the AI can do some stuff that's a little bit more accurate. And I know, before we sold Chips, that you would have innovation meetings with some of our clients on how they can better drive their data, drive information, how to get rid of some of these tasks that they had to do. Maybe you can give us a little bit of insight or discuss that a little bit here too.

Speaker 1:

Yeah, so I think we jumped the chasm way too quickly, from I need to figure out what AI is, to I'm going to use it to replace half my people. It's a good theory... well, it's not a good theory, but it makes sense in thought. In theory, yes, but in practice it's obviously completely unrealistic.

Speaker 1:

But I think that we don't lend enough value to the human-AI collaboration process. So what I mean by that is, it's a perfect example. Right, there are menial, and, you know, menial is not the best word, but there are tasks that don't require a ton of high-level intelligence and thought and processing from a person, that can be automated by AI. Right, whether it is categorizing something, classifying something, extracting it. Right, if I was to give you, let's say I gave you a legal brief to read, well, we'll use a contract: please pull out the terms of this contract that our client needs to be worried about. It will take you some time, and you have to have a legal mind to do it, and that's fine. But you can very easily train an AI to do a really good job at something like that, if you specifically train it for that. But ideally, you want it to work hand in hand with a human being, right? Because you want it to be able to essentially go back and forth, where the human can ask the AI questions. Right, what are the terms of this agreement? What's the indemnification clause? What is the non-compete? What is the non-disclosure? And when it spits back an answer, you can also then ask it, well, does that term match up with state law in Delaware, or something, I don't know, I'm just making this up, I don't have a legal background. But my point is, the collaboration of that is important. And we went way too far to the kind of McDonald's fast food model, where we don't need people taking orders anymore, we can have machines do it, and we could automate the whole process. Doesn't work for McDonald's, and it certainly doesn't work in our businesses with what we call knowledge workers. Right, find the areas in your organization that you can automate and optimize with some sort of machine assistance, and let your people focus on the things that they are good at in a unique way. So managing clients, understanding relationships, selling, just human efforts that require that human touch that we have not been able to automate yet, despite what Sam Altman says and thinks. That human touch is still critical to success in business, but it works better if they work hand in hand with a machine. And, quite frankly, it's also critical that they vet and QA that machine, right?

Speaker 1:

So again, that example, I'm not going to go back into it, but that attorney example would have been fine if he had read the cases and said, this is ridiculous, these are not precedents, I'm not going to turn this in. He could have fed the precedents into the system and said, turn this into a brief, right? So that goes from a major failure, someone just trying to shortcut their job and not qualifying the results that come out. You could have very easily flipped that around and said, look up these two cases in a specially trained model, and help me write a brief based on these precedents. That would save him a ton of time, but it still requires the legal mind and legal expertise. So it's a bit of a subtle difference there. But I think that we're undervaluing that human-AI collaboration. I know I've said that three times, but I really believe in that.
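
A toy Python sketch of that division of labor. The extract_terms function is a hypothetical stand-in for whatever trained model you use; the part that matters is the explicit human sign-off before anything gets used.

def extract_terms(contract_text: str) -> dict:
    # Placeholder: swap in a real call to a model trained for clause extraction.
    return {
        "indemnification": "...",
        "non-compete": "...",
        "non-disclosure": "...",
    }

def review_with_expert(contract_text: str) -> dict | None:
    # The model drafts; a person with the right expertise approves or rejects.
    terms = extract_terms(contract_text)
    for clause, text in terms.items():
        print(f"{clause}: {text}")
    # Nothing goes out the door until a qualified human signs off, which is
    # exactly the step the attorney in the story skipped.
    verdict = input("Approve these extracted terms? [y/n] ")
    return terms if verdict.lower() == "y" else None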

Speaker 2:

Yeah, and, you know, to your point, we've kind of seen this, right? When, you know, classifying tickets, budgeting tickets, right, those are things that are not easy for someone who's not technical, right? But someone who's, like, at level two? Sure, it's pretty simple for them. But do you really want them spending the time on classifying tickets or budgeting tickets? You have to know, right? So if you could train the AI model, you can get that done accurately, way more accurately than a human being. And, by the way, the AI will do it every single time.

Speaker 1:

So it's funny, I think that's interesting, and I think you touched on a good point there. And again, I know not everyone in this webinar is in IT support or in an MSP, but it's something we know well, so it's a good talking point. A lot of companies that I've seen, us included, gave this task to someone non-technical because it seems like a low-value task. And maybe it is, but it's critical.

Speaker 1:

I think classifying and budgeting tickets, again, in IT support, is crucial to the success of your support delivery organization. But I'll be honest, I don't want to pay someone $100,000 a year to read every ticket and say, this is a Citrix issue, this is a password reset, this is a VPN. There's got to be a much better use of that time. So what we choose to do, to our own undoing sometimes, is give that to someone non-technical. The problem is, they're not qualified to do that. So really, those are the areas where, if you think you can train some automation system to do that, you absolutely should, and focus on the areas that have high value to your organization.

Speaker 2:

So, you know, generative AI has been obviously pretty big now for the last 18 months. You know, OpenAI gave some great exposure to this technology. Is there anything companies should be thinking about differently when deploying AI solutions, particularly generative AI, compared to traditional products and software solutions?

Speaker 1:

So I'm going to answer that question. I'm going to interpret that question my own way and answer it in a couple of different ways, kind of like you used to do when you were in college, when you didn't know the answer on a test and you basically wrote your own question and answered that.

Speaker 2:

I'm going to do that a little bit. What's that? I said I did do that once.

Speaker 1:

But I actually know the answer to this. I'm going to answer your question, but I'm going to answer it in a couple of different ways. So you mentioned, obviously, that last year there was an explosion in this technology and a lot of excitement around what OpenAI was doing, and that's great, and I think 2023 will go down as the year of the awakening of AI, specifically generative AI, and where it really went mainstream. It got consumerized, it got easily consumable by users and businesses and the like. I think the theme of 2024 is going to be guardrails. It is going to be exactly what we talked about: understanding these outputs and making sure that they are grounded in truth and grounded in facts, and that there is governance on top of them. So, as you move forward this year, that is sort of the mantra I want you to keep in the back of your head: governance, control, and guardrails on the data that you are letting into and out of your organization. So I think that is crucial to the success of these. Again, we're risking things like data breaches. We're risking inefficient workers, bad information making its way into your organization. People actually see now that that's possible, thank goodness. I mean, it took a year, but people are starting to see that that's possible. And I think guardrails are critical. And I think, again, as you go to vendors and talk about solutions, whether it's us or anyone else, like, I don't care if you've never talked to us from a business standpoint, just call me to BS or just ask for some advice, I'm happy to talk about it. When you go to your vendor and they talk about AI, ask them what they're using, ask them what guardrails are in place, what controls, what governance, how they are making sure and vetting the data. That's the first piece. The other piece is more of an answer to your question, which is around rolling these systems out.

Speaker 1:

And there's this gentleman who, I think, is one of the foremost, in my mind, voices in generative AI. He's the dean of students at the NYU Stern Business School. His name's Conor Grennan, I believe it is, and he talks all the time about AI, specifically around generative AI. And he makes this really good analogy around deploying these systems inside your organization, and I'm going to steal it from him because I find it fascinating and I think it's something that you should all be thinking about. He calls it the treadmill effect. So traditionally, when you deploy software inside your organization, you do a bunch of things. You make a bunch of wacky videos that say, here's how you use the system, and you shut off access to the old system. And you do training and you give a bunch of metrics and say, you need to do this, this, and this. And there's this sort of methodology for training people on how to use this new software and getting it deployed, and they, quite frankly, have no choice, because that's how they do their business. That is not what AI is in any way, shape, or form.

Speaker 1:

AI is a change of mindset and a change of behavior. Think about a treadmill for a minute, right? So let's say I want to lose weight, and I think the way I want to do it is by running on a treadmill. Right, and I probably should more than I do, but that's neither here nor there. I know how to run on a treadmill. I don't need to watch a video, no one needs to train me, no one needs to teach me how to use it. I also can run on a treadmill for five minutes. I'm not going to lose any weight running for five minutes, and I'm going to stop after five minutes because it is mind-numbingly boring, and I just have not changed my mindset to make this a critical part of my day-to-day routine, my day-to-day activity.

Speaker 1:

That is the way you need to think about AI software. It is a behavioral change for your organization. They have to start doing things differently. It's not just a different system, right? If you're using Microsoft Dynamics and you move to Salesforce, there's a different UI and you need to learn how it works and there's a bunch of things you need to go through and understand how workflows work and all that fun stuff. But at the end of the day, it's a CRM platform and you shut off the old one and you use the new one, you watch a video and you're good to go.

Speaker 1:

If you put generative AI, or any AI platform, in front of your users and you don't make them change behavior, through things like shutting off access to legacy systems, requiring searches or generations, or whatever the case may be, you are never going to get adoption. Humans are inherently lazy. That's just all there is to it. It's the reason I don't wake up in the morning and go to the gym every day. It's the reason I don't run on the treadmill for an hour. I want immediate results. I want immediate feedback. I am a lazy animal. If I don't change my behavior, I am never going to get the results from that opportunity, which, in this case, is AI, or, in my example, is exercise.

Speaker 2:

Boy, I thought I was the only one. I'm glad to hear there's someone who makes excuses in the morning not to exercise. We've also seen, in the last probably six to nine months, or maybe a little bit longer, some of these Fortune 500 companies put in these AI use policies, right? So they've blocked things like OpenAI because they were afraid of, you know, A, backlash, B, they can be liable for things that it's returning, you know, for several different reasons, and it may not be good for them, you know, they could leak customer data, all these types of things, right? So they put in all these AI use policies. Should every company create an internal AI use policy for their employees to abide by?

Speaker 1:

I think it's crucial. I think internal and external, quite frankly. So I think it's crucial that you put policies in place in your organization, just like you do with everything else, right? We have acceptable use policies for email. We have acceptable use policies for Internet access. Right, like when we had our company, when we were all under the same roof, we had to block streaming services because we had employees that were streaming the World Cup. We had customers fire us because we had employees checking fantasy football scores on site at their offices. So we have to put these policies in place, and I think it's equally important with AI, again, both internally and externally. So you need to put these policies in place to say what people are allowed to do, how they're allowed to do it, what platforms they're allowed to use, all of that.

Speaker 1:

Now, I'm not a lawyer, I'm not giving legal advice. There are people out there that know a lot about this stuff, and there are some templates available. You know, I'm a part of CompTIA's AI Advisory Council; we're putting some prescriptive guidance together for people. So you definitely should be doing that. But, by the same token, you should be relying on that from your vendors as well. So if you have a vendor that has AI baked into their solution, you need their policies and their procedures as well. Like, the last thing you want to do is be liable for something that your system spits out because of a platform you bought from a third-party vendor.

Speaker 1:

I do this when I do these live presentations, where we actually have slides: I talk about this really funny example. So there's this product called AI Lede, L-E-D-E, and the lede, for those of you that don't know journalism, is like the first line of a story. So if someone says don't bury the lede, it means put what actually happened in the first line of the story, as opposed to making someone dig into it. So this platform, AI Lede, is for small businesses, small newspapers rather, that want to report news stories but don't have enough people, enough journalists, to write up the news story. So basically, you feed it a bunch of details and it spits out a story. It's interesting technology. It's pretty cool the way it works.

Speaker 1:

Someone found an example, it wasn't me, I wish I could take credit for it, where this newspaper in Ohio somewhere, I think in Dayton, Ohio, went and reported on a football game between two Ohio high schools, and the headline was something along the lines of a close encounter of the athletic kind. So basically, it was using generative AI, and it got too much Star Wars in its training, so it called this game between two high schools a close encounter of the athletic kind. But what was funny about it was, the person that originally found this went and Googled that phrase and found it in hundreds of other articles in small local newspapers writing about high school and college sports.

Speaker 1:

So my point is, like, if you buy this product from someone, do you really want to have it generate the same headline for you that everyone else does, especially one that sounds as ridiculous as that? Let's look at the MSP space, for example. Right, and I don't know of anyone that's doing this, so I can say this freely. If you go to work with a marketing vendor that specializes in MSPs and you say, help me write content for my website, and they're fully upfront and disclose to you that they use AI to write their content, well, what guarantees do you have that the same content doesn't get repurposed and re-fed out into a bunch of different places, a bunch of different MSPs, especially if they're giving you the platform and not giving you the person? So I think that's important from policies and procedures. I've rambled off of the question now, and I apologize for that, but yeah, it is certainly important to put that stuff into place. I think it's one of the first things you should be doing.

Speaker 2:

So, talking about expanding on a question, this next one we could talk about for the full hour if we wanted to, but we probably only have about five to seven minutes, so it's really important. You know, last year, 2023, we saw generative AI, you know, really explode onto the scene, right, with the commercialization of OpenAI. What do you see in 2024?

Speaker 1:

Yeah. So I mean, I talked a bunch about this already. I think 2024 is a couple of things. The first one I'll reiterate is that it's the year of the guardrail. So, very critical that governance starts to become an overlay on all of this. I would not do anything AI-based without putting some sort of governance in charge of it. The other piece I would mention is that I think 2024 is the year that AI becomes multimodal. And what I mean by that is, it's got to become a seamless transition between different modalities of communication.

Speaker 1:

Whether it is, I ask you a question, you generate text. Well, what I would like you to do when I ask you a question is to generate a text response, but also maybe build a diagram with an image or some sort of an animation that shows how something works, right? So if I have an AI model around auto repair and I have to replace a carburetor on my '67 Corvette, I may want an animated video of how that looks. I may want images of it. I just don't want line-by-line instructions, so to speak. So I really think that 2024 is the year that data becomes multimodal, and critical, and I think that a lot of the companies we see are starting to do that, right? So what OpenAI is doing with their Sora piece, where you can just give it a one-sentence explanation and it comes out with a one-minute video, that's cool, but I think it just needs to get a little bit more seamless.

Speaker 1:

And again, like I said, AI governance is crucial. All you need to do is look at what governments, and legal bodies, legislative bodies, are doing to control this. So they're looking at everything from copyright: OpenAI is being sued by everyone from the New York Times to Sarah Silverman, because they trained their model on what those people consider copyrighted data. So really: transparency on the content of these models, governance on what gets output, and multimodal, where you can seamlessly jump back and forth between everything from text to speech to video, and so on and so forth.

Speaker 2:

You know, it's funny. Today the governor of Tennessee signed into law how he's protecting musicians, right? No surprise, Tennessee. Protecting musicians from AI, right, and how their intellectual property is to be, you know, not absorbed by AI, making it, you know, impossible for them to make future royalties and everything else. So you see this more and more, you know, from local government. We've seen it in the EU, which is a little bit further along with AI governance, right. And, you know, people just don't want to be held liable for that black box, potentially. They want to make sure that they are in complete compliance, and when you're kind of buying these platforms, databases, services, Amazon, whatever, you don't want to have to worry about the AI governance.

Speaker 1:

And that's what we're kind of in the wild wild west of. Fundamentally, that needs to be baked into the product, and the only way that's going to happen is if everyone starts pushing back on these vendors to ask questions about governance and how they are leveraging it. I do want to ask a question.

Speaker 2:

Last thing: it sounds very similar to, like, you know, cybersecurity, where now you have to send out surveys to your vendors about cybersecurity. I'm sure the same thing is going to be coming down.

Speaker 1:

Yeah. So, I like to say, we joke about this, because back in the day, when cyber breaches started to become a reality and these insurance companies started underwriting cybersecurity policies, you and I used to joke that these insurance companies had no idea what they were underwriting, what they were insuring. And, sure enough, they all got slaughtered on paying out cyber liability claims. I can assure you that's not going to happen again. Right, they are putting these controls into your policies. They're going to ask questions about generative AI. They're going to ask what models you use. These insurance companies aren't going to get raked over the coals again on claims, whether it's copyright infringement, liability claims, or whatever, that come out of generative AI. But along the same lines, I know we have some other questions and we're wrapping up, but there's a question from Kushi, I hope I got that name right, I apologize if I did not, asked in chat, that I think is interesting.

Speaker 1:

It's a point I want to touch on, and the question is: what potential societal impacts do you foresee as LLMs become more ubiquitous in various industries and applications? And the first thing that jumps to mind for me, which I find very interesting, is that, as we sit here today, I think the estimate is something like 25% of all the content in the world was created by generative AI. Two years ago, that would have been less than 1%. I forget the statistic I saw for when it's going to be like 50% or 75%, or whatever. But the bottom line is, so much of the content we see today is created by generative AI. I see it every day in my life. I scroll my LinkedIn feed and, very obviously, you can tell when an image is generated by, you know, DeepMind or any sort of generative image creation. If you start typing a post in LinkedIn, it asks you if you want generative AI to clean it up or to create it for you.

Speaker 1:

Like, everything you do, this stuff is baked into it. And what's happening is, it's becoming a loop now, where we are using generative AI to train large language models, and that is a very dangerous precedent. I'm going to tell you why: because everything starts to sort of flatline when that happens. One of the values of training these large language models on actual human-generated content is the ebb and flow of it: the positives, the negatives, the intricacies, the weirdness. Like, all the outliers are actually really valuable for the large language model, believe it or not. So when we start to flatline this, it kind of starts out here and sort of streamlines, and now everything is one flat line, and that concerns me significantly around the future of what these models look like. I think what will happen is that these models will start to essentially go off the rails and become somewhat useless, and we'll have to take a few steps back and rethink them. Which is why I think, personally, that these smaller models will be much more valuable in the long run than these huge large language models, because they will be custom-built for specific tasks. I'll give you a great example.

Speaker 1:

This week, Anthropic, who's one of the other really big players in the space, announced that their Claude 3 model actually beat OpenAI's GPT-4 in a bunch of benchmarks. All right, great. They're both trillions of parameters. Who cares? What was more interesting, though, is that their smallest model, which is called something with a Q, I forget what it's called, their smallest model was outperforming their largest model on specific tasks. My point is, and again, that circles back to what we said earlier, I think these smaller models that are custom-built, with a lot fewer parameters, that aren't retrained on the content they generate, will start to become much more valuable, and until we rein those in and start to leverage those, we're going down a bit of a dangerous path. Hope you enjoyed that conversation. Thank you so much for listening, tuning in as always. Check us out at crushbank.com, follow us on LinkedIn at CrushBank, and keep it tuned here for future episodes of our podcast. Thanks again. This has been the CrushBank AI for MSPs podcast. Thank you.

Resetting Expectations Around AI
Leveraging Large Language Models for AI
Leveraging Data for AI Success
Human-AI Collaboration in Business
AI Deployment and Governance Policies
AI Governance and Communication Future
CrushBank AI for MSPs Podcast