CrushBank - AI for MSPs

Lessons for MSPs from Google's Cautionary Tale

February 28, 2024 CrushBank Season 1 Episode 4

Discover the critical lessons from Google's AI misadventures with their Gemini service. In a week that shook the tech giant to its core, we unravel the PR nightmares and technical missteps that led to a hasty retreat and a hit on their stock value. We're pulling back the curtain on the governance and oversight—or lack thereof—that businesses must implement when venturing into the frontier of AI development. Get ready to find out how Managed Service Providers can apply these insights to avoid similar pitfalls.

Speaker 1:

So if you have been paying attention to the news, you probably know it's been a pretty rough week for Google. On this episode, we're going to talk about what happened and I'm going to explain how something from back in the 1960s might be the underlying problem Google is facing. We'll also get into why it's important to understand these issues and what MSPs can learn from them. My name is David Tan and you're listening to the CrushBank AI for MSPs podcast. So, yeah, if you haven't been paying attention, this week is what I would consider a bad week for Google and, quite frankly, it all stems from their Gemini AI service. So let's just take a step back for a minute and talk a little bit about what Gemini is and the problems they had this week, and then we'll dive into, I think, some more interesting conversation around it. So, like I said, Gemini is Google's AI service. When it was initially launched back in November-December of 2022, it was known as Google Bard. It was released almost immediately, literally within a week or so after OpenAI announced their ChatGPT service, and that's important, and we're going to talk about that as part of this discussion. But that's what Gemini is. It has evolved over the years. A couple of months ago, Google released a video showcasing the capabilities of Gemini, which were really cool and interesting. The problem was the video was all staged. It wasn't actual output that they were getting from the Gemini service, and they were only a little bit forthright with this. They kind of made it known, but not really. It wasn't that obvious. If you looked, if you dug, you could figure it out, but it was just another form of bad publicity.
That's the kind of bad publicity Google's gotten around their AI service in the last 12 to 18 months or so. But anyway, last week they made it live so that people could start playing around with it in two areas in particular: the chatbot, a competitor to ChatGPT, and image generation, also a function that OpenAI offers, and there are a bunch of other companies that provide generative image capabilities. But Google, obviously one of the biggest players in the tech space, wanted to release their service and show how it was among the best and most powerful and capable models. And what happened, almost immediately, is what tends to happen with these things: people try to break them, or even if they don't try to break them, they use them and they pound away at them and try and figure them out. And, I'll be honest, I'm a little reluctant to be too detailed about some of the issues they had, but I think it's important for this conversation. So I apologize in advance if anything that comes up in the next minute or so is offensive. I am merely repeating what Google's Gemini AI service was doing.

Speaker 1:

So first was sort of the text piece of it. With the chatbot, people were asking questions around depictions of historical characters, and the most widely publicized mistake was the chatbot refusing to determine who had a more negative impact on history between two figures, Adolf Hitler and Elon Musk. Now, I don't care what you think about Elon Musk, obviously that's a really lopsided comparison. It's also not necessarily a place where an AI model or an AI service should opine on the degree of evil of a person, but certainly when you put Adolf Hitler in a conversation, it's difficult not to have a fairly clear and concise answer. That's more of an example of people trying to break it, like I said, but the response was still inadequate. The answers that came back were woefully lacking and certainly underwhelming.

Speaker 1:

The other piece, which was even more highly publicized and, depending how you think about it, a little bit more egregious, was that the model flat-out refused to create images with white people in them. So, in other words, users were asking for images of our founding fathers, and they were rendered as all different ethnicities and minorities. Even the founders of the company: Sergey Brin was being portrayed as Asian, and Larry Page the same thing. So people were calling it woke, they were calling it the liberal image bot and the liberal chatbot. But it was, again, egregious, completely hallucinating who these people were and what they looked like.

Speaker 1:

Now, in fairness, and again this is a sensitive topic, so I'm going to tread as lightly as I can, Google is hyper-sensitive to this because of another issue they had years ago, where their visual recognition model was tagging Black people in images as gorillas. Obviously, that is horrifying on so many different levels, but I am sure Google overcorrected, steered into the skid, so to speak, and probably put too much training in the other direction. It goes to show the dangers of these models, what can happen, and how sideways things can go, quite frankly. Clearly Google had no oversight, no governance. I don't know how things like this make it out of testing, make it out of the lab. That's a whole different story, a different conversation.

Speaker 1:

But just to bring the week full circle: obviously this is bad PR. This doesn't look good for Google, for their capabilities, for their future product offerings, and, as you would expect for a large public company, they felt it in the stock price, as is wont to happen in cases like this. For the week, I believe they lost something like $90 billion of market cap. The stock was down close to 5% and hit its low for the year. As I record this it's only the end of February, so being the low for the year probably isn't the most telling statistic, but it is when you consider the tear the market's been on through 2024 so far. So really, just again, a bad week. What Google ended up doing, to their credit, was they almost immediately pulled the Gemini services offline, made them unavailable, and the CEO of the company, Sundar Pichai, came out in the last few days and basically said that they are working around the clock to fix the problems and they will keep updating. He's been transparent. They have been communicating, which is the best you can ask for, but again, still not a great look, not a great situation for Google, even just from a PR standpoint.

Speaker 1:

I'm actually going to read a quote here from an analyst at Loop Capital, who wrote: "This is a meaningful blunder in the PR battle surrounding generative AI and further suggests that Google is trailing and mis-executing in a fast-moving and high-stakes space." The reason I read that quote is because it feeds into the other thing I want to talk about, what I really want to talk about today. As I sort of teased at the top, a lot of what we are seeing from not just Google, right, I'm not just trying to pick on them, but from a lot of these software companies, particularly tech companies, around AI dates back to something from the late 1960s. And it's not a technology that it dates back to. Obviously, this is all relatively new technology, despite the fact that AI has been around since the 50s and 60s, but that's not what we're talking about here. What I want to talk about, and the way I want to start, is by telling you about an engineer, a British civil engineer, who lived in the second half of the 20th century. He was born in 1939 and actually just recently died, in February of 2022.

Speaker 1:

His name was Dr. Martin Barnes, and Dr. Barnes is credited with creating what we would almost think of as modern project management. He was one of the fathers of the science and study of project management, and really, if you're in software, his work and his theories and philosophies almost dictate the role and function of a job such as project management, particularly in software development. What Dr. Barnes worked on was this concept known as the iron triangle. So, without the ability to be graphical here, I will try to explain what that is, and again, I'll explain why it's interesting and relevant. If you picture a triangle for a moment with its three corners, what Dr. Barnes said was that project management is essentially made up of three different components: resources, time and scope. And I'm gonna focus specifically on software development, because what he said, he said in the greater scheme of project management in general, but I'm gonna talk about it in software development.

Speaker 1:

What he said was that if you want to be able to deliver great software, you have to be able to move at least one of those three corners of the triangle. So again: time, scope and resources. If you can't move one of the three, you can't deliver great software, you can't deliver great outcomes. I'm gonna explain why this is a problem specifically for a company like Google, playing catch-up in the AI space. So first let's talk about scope.
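As an aside for the technically minded, the rule Barnes described can be sketched as a toy model in a few lines of code. The class and field names here are purely illustrative, not from any real project-management tool:

```python
from dataclasses import dataclass

@dataclass
class Project:
    # The three corners of Dr. Barnes's iron triangle.
    scope_locked: bool      # features dictated by competitors
    resources_locked: bool  # money, people, or hardware unavailable
    time_locked: bool       # release dates driven by outside events

    def can_deliver_quality(self) -> bool:
        # Barnes's rule of thumb: at least one corner must be free
        # to move, or quality is what gives instead.
        return not (self.scope_locked and self.resources_locked and self.time_locked)

# Google's position as described in this episode: all three corners locked.
gemini = Project(scope_locked=True, resources_locked=True, time_locked=True)
print(gemini.can_deliver_quality())  # prints: False
```

The rest of the episode walks through why, for Google right now, each of those three flags is effectively set to True.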

Speaker 1:

So when Bard was first released, back in, I think it was, late November, early December of 2022, it was a direct reaction to what OpenAI had released with ChatGPT. And if you don't remember, I'll tell you the story really briefly: they went and did a live demonstration of it, you know, again, two to three days after it was released, and on the third or fourth question that someone asked, it hallucinated an answer. Now we've come to know that hallucinations are a real problem with these generative AI models, specifically things like OpenAI's ChatGPT, and there are ways around that. There are companies doing some work around governance and ensuring that the answers are accurate and things like that, and IBM is really a leader in that space. I'm sure you've heard me talk about that. But if you just have a large language model that is meant to sound like a person, which is what these chatbots do, even though they're trained on information, they are not meant to be accurate. They're not required to be accurate, I should say; they are required to sound like a person. And Google was sort of the first one to demonstrate that publicly. It was certainly embarrassing for them at the time and they were playing catch-up, and in the time since then, so in the 14 to 16 months since, they continue to play catch-up.

Speaker 1:

So if you missed the announcement, about two weeks ago OpenAI announced Sora, their generative video service. They didn't release it for people to play with; they released examples of it. What it does, if you're not familiar, briefly, is it's another large language model, but this time it takes a one-sentence description and creates a video of it. So the examples they showed were things like dogs playing in the snow, woolly mammoths roaming the countryside, things like that. Really cool, high-res videos that were created from just a one-line sentence.

Speaker 1:

And the possibilities of this are interesting, and we'll probably, in the future, come back and talk a lot about what this means, what the implications are, you know, around the entertainment space and business in general, and the positives and negatives of it. I have my thoughts on it, but that's not what this is about. This is about the iron triangle specifically. So what happened was OpenAI released that, and Google again continues to try and play catch-up. So they potentially rushed out their release of Gemini, or the latest release of Gemini.

Speaker 1:

So again, I mentioned scope. If you think about a company like Google that's playing catch-up to OpenAI, in essence their scope is locked, and what I mean by that is the features and functions and requirements of their product are being driven by their competitors. So they're constantly one to two steps behind and they're trying to come out with software that does what the competitors' software out there does. They can't go and say, I'm going to build 30% of this, I'm going to build 40% of this. They have to match feature for feature as best they can, so they can't move that corner of the triangle. Scope is locked for them. Again, based on the situation that they find themselves in, that can certainly change, but that's the case right now.

Speaker 1:

So the second one is resources. When you think about resources, you can think of it a couple of different ways. Obviously, the first thing you think about is money, right? Google's got more money than they know what to do with, probably. They could throw hundreds of millions, even billions of dollars at this problem, and they probably do. I am sure they spend a tremendous amount of money building and developing these platforms. The problem isn't the money. The problem is the resource constraints in two other ways. One is a physical hardware constraint. I've mentioned it here in the past and we'll probably talk more about it at some point in the future.

Speaker 1:

But all of this generative AI requires what we call GPUs, graphics processing units, which are the chips that go on video cards. Basically, that's where they got their start, and that's why NVIDIA is the leader in this space. NVIDIA originally pioneered this to make gaming more efficient on PCs, because gaming requires heavy mathematical calculations. It then grew into crypto mining: when you're mining for Bitcoin specifically, or any cryptocurrency quite frankly, again, it requires complex mathematical calculations, and GPUs are just better at that than CPUs are. So you really need GPUs. And then AI, the same thing. All of the stuff underlying it is just complex mathematical calculations, so it requires GPUs. And the simple fact is there are just not enough GPUs in the world, and NVIDIA, as you've seen from the stock price, you know what's happening to them as a company, they are doing the best they can to pump them out. Other people are trying to create their own chips. If you missed it, Sam Altman, the CEO of OpenAI, announced, probably two to three weeks ago, that he wanted to raise $7 trillion to build his own GPU chips. I have thoughts about that, but we'll save that for another day.
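For a rough sense of why this all comes down to math hardware, here is a back-of-the-envelope sketch. The layer size below is a made-up, GPT-3-scale example, not a real model spec; the general rule is that multiplying an m-by-k matrix by a k-by-n matrix costs about 2*m*k*n floating-point operations, and large language models do this constantly:

```python
# Rough arithmetic: one pass through a dense layer multiplying an
# (m x k) activation by a (k x n) weight matrix costs about
# 2 * m * k * n floating-point operations (one multiply plus one
# add per term in each dot product).
def matmul_flops(m: int, k: int, n: int) -> int:
    return 2 * m * k * n

# A made-up example: one token row through a single square layer
# with hidden size 12288 (roughly GPT-3 scale).
flops = matmul_flops(1, 12288, 12288)
print(f"{flops:,} FLOPs for one token through one layer")
# prints: 301,989,888 FLOPs for one token through one layer
```

Multiply that by dozens of layers, thousands of tokens, and millions of requests, and you can see why hardware built for massively parallel arithmetic wins out over general-purpose CPUs.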

Speaker 1:

But that is a constraining factor on the growth and development of AI, to the extent that there is just not enough capacity for everything we want to do. So that's one piece of the resource constraint. The other piece is just the people involved. These are not typical software developers that you can just get to write code. You can't post a wanted ad on Indeed for someone that understands large language models for image generation. These are incredibly rare, incredibly highly educated, brilliant mathematicians and data scientists, and the level of people that you need to work on this is almost inconceivable, and there are just not enough of them, again, quite frankly. So one first lesson learned: if you have a child or someone young in your life that's looking to determine what they want to do with their lives.

Speaker 1:

I highly recommend studying this type of stuff: a science background, computer programming, data science. It is the wave of the future. Obviously that kind of goes back to what I was saying a moment ago, but there are just not enough people. So Google can try to do whatever they want. They can try and steal engineers from other AI companies. They can try to train more of them up, try to make them more proficient and efficient. I'm not really sure. But the simple fact of the matter is that there are just not enough people to do all this work that needs to be done. So, in essence, the resource corner of the triangle is locked. So now we've got scope locked and we've got resources locked. The other variable is time, and time certainly isn't "locked" for Google, and I put that in air quotes, but as we've seen from their pattern of behavior, it kind of is. And what I mean by that is they released the Bard public demo, again, a week after OpenAI announced ChatGPT. They released Gemini into the world a week or so after OpenAI announced Sora. So they are very much making these releases and these announcements based on factors outside of their own control.

Speaker 1:

What other companies are doing, again, whether it's OpenAI or Microsoft, which is obviously kind of one and the same, although Microsoft made some interesting investments this week in a company called Mistral. Again, there are all the tech giants: Amazon, and IBM I mentioned, and Anthropic. There are just a lot of very big, very wealthy tech companies making major strides and major announcements in this space, and Google's terrified, quite frankly, to fall behind, because they know that, in their mind, this threatens their business model, right? Let's put it that way. People do believe that an intelligent, large-language-model-based chatbot will potentially replace things like search, and search obviously drives their business to a very large extent. Obviously, they have other revenue streams, but Google's feeling the pressure, there's no doubt about it. As much pressure as one of the Magnificent Seven tech companies can feel, they are feeling the pressure to keep up, catch up and potentially even move ahead of OpenAI.

Speaker 1:

So essentially, what we have now is the three corners of the triangle locked, again, mostly outside of Google's control, so they are just plain releasing bad software. There's no better way to put it. It's a bad look for Google in a lot of ways. They probably have to take a step back and decide how they wanna deal with this. Are they willing to fall behind? Are they willing to diverge with what they're developing? Are they gonna go crazy and acquire somebody? They made an announcement this past week that they're acquiring, or I should say licensing, Reddit's data for something like $60 million, which kind of goes hand in hand with Reddit announcing an IPO, which is another interesting conversation. But my point is, it's an aggressive move, because they wanna license it for generative AI models. So those are the types of things they're gonna need to do in areas where they just can't keep up, because the simple fact of the matter is that Google can't continue to create this flawed software and release it to the world, because people will lose faith. They've already lost faith.

Speaker 1:

I don't know when Gemini gets re-released. Whenever that is, like I said, they're working around the clock. Let's say, two weeks from now, they announce that there's a new Gemini model out. How many people are going to allow their businesses to start leveraging it immediately without testing it, without QA'ing it, without beating it up and trying to break it, quite frankly? I certainly wouldn't, and most people I know wouldn't. So, you know, Google's put themselves in a pretty precarious situation where they have to start to regain the trust of the public if they wanna keep doing what they're doing, if they wanna compete in this space. And that's where it gets kind of interesting to circle back a little bit and bring it home to talk about MSPs, right? So I said a couple of things, like they have fallen behind, and this is the type of thing that happens.

Speaker 1:

Now, this is unique for a couple of reasons. It's not uncommon in this type of scenario in the software development world, where you have a competitor that's a leader. I'm just gonna make one up, I don't necessarily have details around this, but let's talk about the ERP space, right? ERP is constantly changing over time, who the leader is, but obviously for a long time, we'll just say PeopleSoft was the leader, right? And I am sure that there was a lot of this type of thing, where JD Edwards and Oracle and the other competitors in the ERP space were trying to create software that just kept up with PeopleSoft when they were the leader, right, kept up with their features and functionality. And then people started doing it differently. Salesforce came out, Workday came out, so there was a bit of a change in that space. But the difference there, and the reason this one is unique, is that the resource constraint was probably still there from a money standpoint, right? Not that Oracle doesn't have plenty of money, but Oracle is not gonna throw $5 billion into revamping their ERP platform just to keep up with PeopleSoft. Or 10 or 15 years ago they wouldn't have, but they could have. And the constraints around qualified engineers and, in this case, around GPUs were not there. So this is unique in that way, but it's not a terribly uncommon story. It is the type of thing that happens.

Speaker 1:

And again, as a "consumer", and I put that in air quotes, a consumer of this technology, or any technology quite frankly, it's important to understand, when you're working with someone that's potentially not a leader in a space, how are they making up for the fact that they have less flexibility in what they're designing and what they're developing? So it's just a really important lesson to learn, and I think it also speaks volumes to the fact that you really need to understand who and what your vendors are using when they build AI solutions. So what do I mean by that? Most people are not going to go to Gemini. Most businesses should say, I'm not going to go to Gemini and get the APIs and start writing code to put this into an application. Most of us are going to rely on a vendor that is using this underlying technology in a smart and intelligent way, right?

Speaker 1:

So let's say, for example, we'll stick with my ERP example. Let's say you have an ERP that does some generative AI around marketing. Well, actually, we'll say more of a CRM, I know that can be a little bit interchangeable. But let's say you have a CRM that does some generative AI around marketing, where you go and you click a bunch of buttons in your CRM and you say, hey, I want to put together a product announcement for this product and I want to send it out to all these clients. Well, that is a really good use case for generative AI, because what'll happen, ideally, is it will generate the letter. It will take some of your specs, it will pull information about your clients, again, assuming the software is built properly, and it will put together a fairly personalized letter that will get sent out.
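To make that concrete, here is a minimal sketch of how a CRM vendor might wire a "product announcement" button to a generative model behind the scenes. Everything here, the function names, the prompt template, the stubbed model call, is hypothetical rather than any real CRM's or Google's API; the point is that the buyer never sees which model is underneath:

```python
# Hypothetical sketch: a CRM wrapping some generative model behind a
# "product announcement" feature. Nothing here is a real vendor API.
def build_prompt(product: str, client_name: str, specs: list) -> str:
    # Turn the user's button clicks into a prompt for whatever model
    # the vendor happens to have embedded.
    bullet_specs = "\n".join(f"- {s}" for s in specs)
    return (
        f"Write a short, friendly product announcement letter to {client_name} "
        f"about our new product, {product}. Highlight these points:\n{bullet_specs}"
    )

def announce(product, clients, specs, llm_call):
    # llm_call stands in for the vendor's chosen model (Gemini, OpenAI,
    # anything else). The end user never learns which one it is.
    return {c: llm_call(build_prompt(product, c, specs)) for c in clients}

# Demonstration with a fake model that just describes its input:
letters = announce("Widget Pro", ["Acme Corp"], ["20% faster", "new dashboard"],
                   llm_call=lambda p: f"[generated letter from prompt of {len(p)} chars]")
print(letters["Acme Corp"])
```

The design point the episode is making: because the model lives behind a function boundary like `llm_call`, the only way a buyer finds out what's underneath is to ask the vendor.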

Speaker 1:

Or let's say, by the same token, you want to put together a presentation around that, and it can create the content of it, but it can also create the images and things like that. Well, you could manually go and do all of those steps, right? You could go to, we'll just use Gemini for now, you could go to Gemini and say, I need this letter written, and you could prompt it with the information you need, and it would put that together and spit it out for you. You could also go and say, I need these images, and it would create those images for you. But most likely you're going to do that as part of another software system or platform that you already have in place, and you may not know what that underlying technology is. That's really my point. So it's critical to ask these questions. I kind of liken it to an example I used recently, and I think it should strike a chord.

Speaker 1:

So back in the day, when I was still the CTO at an MSP and I would work with clients and help them evaluate software products for their use, anytime something was built on a database, my very first question would be: what's the database it's using, right? Is it using SQL Server? Is it using MySQL? Is it using, you know, I was doing this a long time ago, is it using FoxPro or Microsoft Access? Because that information became really important in my decision-making, right? If someone was using a Microsoft Access database to build an enterprise application, first of all, God help them. But even if they were doing that, I was gonna look down on that significantly, and I certainly would not have recommended it to my client. Many times over the years I talked clients out of buying what would have been enterprise software for them, again, we're talking small businesses mostly, but what would have been a line-of-business enterprise software application for them, because the underlying technology just was not any good.

Speaker 1:

We're at a point now, quite frankly, where I generally don't ask those questions anymore, because databases have become fairly mainstream and there's not as much of a disparity between different ones. Certainly I would like to know if a database application I was using ran on Microsoft SQL Server, because, if I could get access to it, I understand SQL really well: I could tune it, I could query it directly, I could maybe do some clever things because of my background in databases and software development. But beyond that, I really don't care what they're using. I care if it's maybe SQL or NoSQL, right? Are you using SQL Server, or Oracle, or MongoDB? Those are interesting conversations, but I don't care about the speeds and feeds of it. We're not there.

Speaker 1:

With generative AI and underlying large language models, you very much need to understand what your vendors are using and why, and you need to ask questions and be paying attention to this stuff, because, again, I've seen way too many examples of vendors in all industries, but particularly in the managed services space, quite frankly, let's just be honest about it, I've seen way too many examples of software vendors shoving this generative AI functionality into a product without understanding it, without knowing how it works and without putting controls in place that protect their clients. This could just as easily be OpenAI, and OpenAI is embedded into so many systems nowadays, and if it starts hallucinating, or if they do an update on the backend that changes the model and something bad spits out, and you don't have an expert there chaperoning it, you can see how this could go off the rails very quickly. So, again, as a managed service provider, you not only have to worry about yourself, but obviously your customers as well. So I implore you to understand why things like this happen.
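One concrete shape those missing "controls" could take, as a generic defensive pattern rather than any specific vendor's safeguard, is pinning the model version and validating output before anything reaches a customer. The model name and checks below are illustrative assumptions:

```python
# Generic defensive wrapper around an embedded LLM call.
# The version string and validation rules are illustrative only.
PINNED_MODEL = "some-model-2024-02-01"  # hypothetical pinned version

def guarded_generate(llm_call, prompt, banned_terms=("As an AI",)):
    # Always call a known, tested model version rather than "latest",
    # so a silent backend update can't change behavior underneath you.
    text = llm_call(model=PINNED_MODEL, prompt=prompt)
    # Basic output validation before anything reaches a client:
    # reject empty output or output containing known-bad phrases.
    if not text or any(term.lower() in text.lower() for term in banned_terms):
        raise ValueError("model output failed validation; route to a human")
    return text

# Demonstration with a stubbed model call:
ok = guarded_generate(lambda model, prompt: "Dear client, here is our news.",
                      "write a letter")
print(ok)
```

It's a sketch, not a complete governance program, but it illustrates the kind of question worth asking a vendor: do you pin model versions, and what happens to output before my customers see it?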

Speaker 1:

Hopefully, this podcast went a little bit of the way to helping you understand that. Just ask the right questions and, again, push back if a vendor doesn't want to be upfront and disclose what they're using. What do they have to hide if you don't know what's training the model, if you don't know if your data is going back into the model? Because this space is evolving so incredibly fast, it's unlike anything I've ever seen, unlike anything any of us have ever seen, just the amount of advancement. Just think about what these companies, these large language models, have done since the day that OpenAI announced ChatGPT in November of 2022. I know everyone thinks that's day zero for AI, artificial intelligence. It's not, but we'll use that as a kind of transition into the modern era, so to speak, because that's when it really seeped into the public consciousness and people saw the power and capabilities and, quite frankly, people started leveraging it and developers started putting it into their products. But it has developed so quickly that we are very much at risk of everything we've worked for going off the rails, just from a security and a reliability and a compliance and a governance standpoint.

Speaker 1:

If we don't act deliberately and ask a lot of questions and stay on top of what we're doing and how this stuff works, we're opening ourselves up for a world of pain going forward. And again, Google is going to be just fine. I am sure they will figure this out. Like I said, I know they're throwing tons of resources at it and I'm confident they'll get something figured out, but either way, they continue to play catch-up. So I guess if I could give you one lesson, one takeaway from this: anytime something new like this is released, always go into it with a healthy amount of skepticism, and always assume there are gonna be problems at the beginning. They will be worked out, and AI will continue to evolve and things will only get better. But for now, ask questions before it's too late. I hope you enjoyed this. I hope you learned something. Once again, my name is David Tan. I am the CTO here at CrushBank, and this has been the CrushBank AI for MSPs podcast. Until next time. See you soon.

Google's Troubles With Gemini AI
The Iron Triangle in Product Management
Understanding Vendor AI Technology Risks