Cloud Out Loud Podcast

Episode 35: AI Hype vs. Reality: Rethinking Build vs. Buy

Jon and Logan Gallagher


In this episode, Jon and Logan explore whether generative AI fundamentally changes the build-versus-buy decision for businesses considering software solutions. They analyze recent claims about AI-powered cost savings and examine the underlying principles that should guide technology decisions.

  • How Logan is using AI in his everyday tasks
  • Looking at recent claims from companies like Klarna about replacing SaaS with AI-built software
  • Revisiting the fundamental truth that organizations must understand their core expertise when making build vs. buy decisions
  • "Vibe coding" with AI creates maintenance problems as context for coding decisions is lost
  • Compliance, regulation, and data residency considerations become more complex with AI
  • Cloud infrastructure reliability remains a critical concern for AI-dependent systems
  • AI is most effective as an assistant for routine tasks rather than replacing human judgment
  • Traditional business frameworks still apply when evaluating technology investments

Links:

Jon Gallagher on LinkedIn

Logan Gallagher on LinkedIn

The auto parts software mentioned:
Triad Systems became Activant

Aaron Levie - CEO of Box:
Box becoming an AI-First Company



If you enjoyed this episode, please tweet us at @cloudoutloudpod or email us at cloudoutloud@ndhsw.com. We hope to see you again next week for another episode of Cloud Out Loud.


Announcer:

Welcome to Cloud Out Loud podcast with your hosts Jon Gallagher and Logan Gallagher. Join these two skeptical enthusiasts (or are they enthusiastic skeptics?) as they talk to each other about the cloud, out loud. These two gents are determined to stay focused on being lazy and cheap as they evaluate what's going on in the cloud, how it affects their projects and company cultures, and sometimes how it affects the world outside of computing infrastructure. Please remember that the opinions expressed here are solely those of the participants and not those of any cloud provider, software vendor or any other entity. As with everything in the software industry, your mileage may vary.

Jon Gallagher:

Welcome back, everybody. It's been a minute. I'm Jon Gallagher.

Logan Gallagher:

And I'm Logan Gallagher.

Jon Gallagher:

And this is Cloud Out Loud. Following the tradition now of the AI edition, we're going to kind of take a step back, not step away, but step back from this technology to talk about why, and to talk about how to make decisions about technology. Because, with where we are in the hype cycle, with the incredible amount of money that's being thrown at AI, with the incredible amount of mind space that it has, we would like to be at least part of the discussion. We'd like to get in there and at least say: here are some general approaches that you should be taking if you're considering using AI in your organization, in your business. And the flip side of that is, if you're the recipient of some of these decisions, these are the things that you should be thinking about in terms of, hey, did they consider this, when you see this sort of thing coming down the road at you. So, Logan, start us off and set some context.

Logan Gallagher:

Sure. So we have, especially in the last six months, seen some pretty significant improvements in the tools that you can use to write code with generative AI. We've seen full AI-assisted IDEs become really popular and capable, like Cursor, and CLI tools like Claude Code and now Gemini's. So we've seen this proliferation of software tools to help software developers write software, and some of them I use every day. Some of them are very powerful and can help you write code more quickly. Often it's when I want to write a function and I know that I could go over and Google the documentation to grab the spec for what arguments this function takes, or I could, in a few keystrokes, get that function generated with one of these tools.
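To make that concrete, here is a minimal sketch of the kind of routine, documentation-heavy function being described (assuming boto3 and an S3 listing purely as an illustration; this example is ours, not from the episode):

    # Illustrative only: the sort of boilerplate an AI assistant can generate
    # from a short comment, instead of looking up the exact argument spec.
    import boto3

    def list_bucket_keys(bucket: str, prefix: str = "") -> list[str]:
        """Return all object keys in an S3 bucket under an optional prefix."""
        s3 = boto3.client("s3")
        keys = []
        # The paginator handles continuation tokens for buckets with >1000 objects.
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
            keys.extend(obj["Key"] for obj in page.get("Contents", []))
        return keys

The value isn't the logic; it's not having to go look up the exact argument names and pagination details by hand.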

Logan Gallagher:

We have seen a lot of discussion lately about how generative AI might be able to help organizations write more of their own code and maybe even replace some of the SaaS tools that they're paying for.

Logan Gallagher:

We've seen some fairly breathless coverage from places like Harvard Business Review (we'll link some articles in the show notes) talking about how gen AI could disrupt SaaS, and how maybe you could cancel your Salesforce subscription if you just use generative AI.

Logan Gallagher:

And we've seen some pretty strong claims by, oftentimes, startups, usually pre-IPO startups, saying that they were able to achieve significant cost savings by using generative AI to build their own software in-house rather than paying for software licenses. In particular, one of the buy-now-pay-later financing startups, Klarna, made those claims last year, but Klarna has had to walk back those claims this year. It seems the significant savings may not have been that significant, especially now that they're about to have their IPO and we can actually get some more transparency into their financials. But that does lead us to an age-old question in software development and management: build versus buy. When does it make sense to purchase software from a SaaS company? When does it make sense to write your own software in-house? That is not a new question by any means. So we want to investigate today whether generative AI changes that calculus of build versus buy at all.

Jon Gallagher:

Build versus buy has been there since probably the beginning of business, but it has definitely been a question that's had to be asked since general computing became available to business. I always think of auto parts. When I was coming into software engineering, there was a dominant auto parts software company that had 80% of the market share, and that sounds like, okay, there's an opportunity there. Why should the industry put up with that? Until you realize what the expertise within auto parts retailing actually is. Auto parts retailing is about having someone there who can answer the question about something that's going on with your car, whether it's the right windshield wipers, whether it's the right brake calipers. It's not about having expertise in how big your database should be or how many records you should keep or what kind of data model you should have. Those are pretty generic things. So this company that had the software, which was started by folks who were in the auto parts industry, had the ability to capture that need, fulfill that need and allow people to get on with their work. You didn't have to hire people to go back and count how many brake pads you had. You obviously had to keep track of your inventory, but you had the ability to refocus your people on their core expertise. And that's very old-fashioned, but that's the perspective I think most people are missing when they get into the hype cycle of things like AI.

Jon Gallagher:

What is our core expertise, and what do we do to enhance the ability of our folks, our resources, to focus on that core expertise? And the rest of it? Strip it away and buy it. The job of identifying what that is is the job of management. It's not the job of the people at the front end; they're too busy dealing with the customers. It's not the job of the people who are actually executing on the factory floor; they're too busy making sure the product's there. The reason why we have a management layer is to understand what the needs are and how to fulfill them. Again, this is a very old-fashioned perspective, but it comes right back to all of the issues that we see come up with things like AI. So you gave us an example of Klarna. You've got a couple of others that folks have been talking about, where people are just saying, hey, AI is saving us so much money and doing all these wonderful things, but peeling back the covers, it doesn't seem to be that way.

Logan Gallagher:

Yeah. So Klarna in particular last year said that they were able to achieve significant cost savings by canceling some of their software licenses and writing their own software systems in-house with the help of generative AI. Some other companies have apparently attempted this as well: Siemens, the industrial conglomerate, has said that they're using generative AI to write software rather than paying for third-party ERP software, and Mayo Clinic says they're piloting some tools. We don't have a ton of visibility into the accuracy of these claims right now, but I would argue that even if we take at face value the claim that generative AI can help you write high-quality code faster, that still does not change the calculus of when you would build versus when you would buy. And the Klarna CEO, who had made those bold claims last year, had to walk them back this year after the CEO of Salesforce was questioned about it on stage.

Logan Gallagher:

Benioff was asked what he thought about customers canceling their subscriptions and writing their own software in-house, and he was skeptical as well.

Logan Gallagher:

The Klarna CEO had to come out and say, we developed our own technology stack in-house to replace Salesforce, but that was because we had a specific need to pull in data from many different sources and make it accessible as a grounding source, maybe for some additional AI tools they're attempting to build. This was not simply a rip-out and drop-in replacement of their existing Salesforce CRM, and he had to admit that they are still paying for many, many other software products in-house at Klarna.

Logan Gallagher:

They are absolutely not ripping everything out and building fresh with generative AI. I think you can continue to use the same evaluation framework: you should buy when you want to acquire process competence from a third-party software application, like the auto parts software that was used by most companies in that industry, so those companies did not each need to individually reinvent the wheel and write their own auto parts management software in-house. Everyone could use this common software, and when they're onboarding a new employee, that employee is likely already familiar with the industry-standard auto parts management software. And you should build your own if you determine that you have existing business processes that add value or represent a competitive edge that you want to capture in software; that's when you would build. It sounds like that's what Klarna determined for themselves, that they had some unique business processes that they wanted to capture, and so they went with the build route.

Jon Gallagher:

That's exactly it. And to emphasize again: you as a manager, you as the owner of a company, you as a decision-maker in a company, need to know what's your special sauce, what distinguishes you from the people down the street, and how you are going to build the systems and processes that support that special sauce. Not, we're going to do the newest thing because reasons. The other thing, and this gets to us technology folks, is it doesn't have to be 100% build-it-yourself or 100% buy-it-and-it's-a-black-box.

Jon Gallagher:

Salesforce itself is an example of extensibility. Salesforce comes with processes and with expertise that you can stitch together, and that may map onto what your special sauce is. Again, you have to understand why you are in business, what distinguishes you and your processes, and whether you are supporting those processes to make things happen. So, rather than just dump everything out, fire up ChatGPT, take the code, compile it and make billions of dollars: understanding why you're in business, what special thing you bring to the marketplace, business planning. It isn't just disruption that's going to make you your money, it's being able to plan out and understand exactly what you bring to the marketplace that's going to make you your money.

Logan Gallagher:

Another thing that third-party software might offer, and that gen AI might run afoul of, is regulation and compliance. I think a big reason why the Klarna folks had to come out and be a little more transparent about what they'd actually built was because, technically, they're a bank. They had to make sure everyone knew that, even though they were building something internally with generative AI, they were still making sure that any financial data was in a secure database. So they had to be a little more transparent that they're using the database Neo4j, that they're using database engines that can be compliant with banking regulation. When you're going with the buy route, you might be buying more than just the software and processes. You might also be buying compliance with regulations in your industry.

Jon Gallagher:

It's much easier to face an auditor when that auditor is familiar with the software package that you're using. In particular, as we deal with compliance with privacy rules, HIPAA here in the US, GDPR, other jurisdiction-specific privacy laws, you have to make guarantees about how and where the data is being stored and how it's being processed, which you may not be able to comply with, may not have the knowledge base to comply with, if you build on your own. But if you are using package XYZ, which everyone in the industry uses, then you inherit those security controls, you can sign a BAA, and you're that much further along the road of being able to operate safely and legally in the industry that you've chosen.

Logan Gallagher:

And lastly, there's long-term support. Whenever you write new software, you're taking on the responsibility of maintaining that software. You are potentially taking on having to fix and patch and debug that software over time, and maybe the person who wrote it leaves the organization and you're needing to figure out what they wrote and why they wrote it. Sure, a generative AI model might help you interpret the functions in that code base, but a generative AI model isn't going to be able to tell you why that software developer made that decision, and that generative AI model probably does not have the entire context of the application in its mind the same way that the software developer who wrote it did. So while generative AI may be helpful in long-term support for software, when you build your own you are always taking on that additional responsibility, debt in many senses: technical debt, maybe even financial debt, of maintaining that software over time.

Jon Gallagher:

Another thing that this discussion about AI, and the hype around AI, seems to have done is create the impression that it's become easier to create these systems. It has become easier to prototype these systems; it's become easier to put something together and show it off. But getting beyond the prototype phase, getting into the engineering phase, where people start to do the thing that's being called vibe coding, you start getting into very, very dangerous territory. You start to, from my perspective, get into the worst of all worlds of being in a maintenance mode. Because vibe coding is the idea of: I'll present a problem to this gen AI, the gen AI will give me a coding framework, I'll run into a bug, I'll submit the bug, and we'll iterate with each other. You, as the engineer, are following the path that the gen AI is creating for you. No one's retaining context. And we're going to abuse the name Fred today, so if there are any Freds listening, we're very sorry.

Jon Gallagher:

Back when Fred was writing a COBOL program, Fred had to keep everything in his head. If Fred left the company, you were in a world of hurt. So Fred really should have documented and such, but you could still go to Fred. If James, sorry, James, is now vibe coding, where is the complete context of this? James knows the prompts that James used and iterated with, but what James knows is the problems that the program ran into and the responses the gen AI gave back. So Claude Code said to do this, or Llama said to do that, or Copilot said to do the other thing.

Jon Gallagher:

There's no context that comes back with: oh okay, I don't like using constants or globals, so I'm going to pull this global down into this function here to make sure it stays within this context.

Jon Gallagher:

And someone comes along and says, where the heck did the database name go? So you don't have consistency. You may try to enforce some sort of coding policy, but at best that covers where the tabs go and how to capitalize your variable names. It's not how you approach a problem and break it down. You're depending on the gen AI to break that down, and you're having to get into the gen AI's context to understand why the code was presented that way. So, whereas Fred might have been able to go back and explain why things happened, it's impossible for the gen AI to say, oh yeah, I've done this three or four times, so I decided to do the other thing. Or, better yet, why it chose not to go down some path, which may be just as crucial: why we did not decide to break up these functions into two separate classes, why we did not decide to call this external API synchronously.

Logan Gallagher:

We can retain some of that context. Maybe sometimes we have forgotten that context as well. We've all been in a situation where we look at code we wrote six months ago and can't quite remember why we made the decisions we made. But we really don't have that context when we're heavily relying on a generative AI.

Jon Gallagher:

Yeah. So where we are in the conversation right now: we started off with build versus buy. Fundamentally, you have to understand why you're making the decision, and the decision is build versus buy, whether you are deciding to buy a software package, or to code it yourself, or to use an intermediate AI-based product to help you. Another iteration of this is that you have to be able to trust what this is producing for you, and we've all heard the horror stories by now, hopefully, about hallucinations and such from gen AI. But let's talk more fundamentally and bring the cloud back into this context. If you are depending on this entity, this gen AI entity, to code for you, to support you, to understand what's happening, how reliable is it? How present is it going to be in extremis? How many nines of availability is it promising you? We recently went through an outage at Google, and that revealed, first of all, that a lot of people I didn't know were running on Google Cloud. And I think it came down to DNS... it was IAM?

Logan Gallagher:

Oh, iam, that's right. Yes, and so it was the fundamental service for controlling authentication and authorization to all of the other cloud services within the platform.

Jon Gallagher:

That's right. They pushed out code that had the possibility of triggering a null pointer. And then a second bit of code walked into that area, triggered the null pointer, and boom. So note here that the problem occurred in the interaction between two separate deployments. It wasn't one bit of code.
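For a rough sense of that failure mode, here's a hypothetical sketch in Python (our illustration; not the actual code, services, or language involved in the incident): a latent, unguarded path ships in one deployment and sits dormant until a later change supplies the input that trips it.

    # Hypothetical illustration only: a latent null-pointer-style bug shipped in
    # one deployment that a later deployment's data finally triggers.

    # Deployment 1 ships a code path that assumes the "quota" field is always set.
    def apply_policy(policy: dict) -> str:
        quota = policy.get("quota")        # silently None when the field is absent
        return f"limit={quota['limit']}"   # unguarded access: fails if quota is None

    # Deployment 2, later, starts emitting policies without a "quota" field.
    new_policy = {"name": "regional-default"}

    try:
        apply_policy(new_policy)
    except TypeError as err:
        # The dormant bug only fires once the second change provides the bad input.
        print(f"crash: {err}")

The error only appears once the second change supplies the policy with the missing field, which is why neither deployment looks dangerous on its own.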

Jon Gallagher:

So God help us if that had been buried in some vibe-coded stuff. Here, you still had human engineers who were able to go back and say, oh, that's null. How did that end up being null? And bring the context back. That being said, no one is running the compute facilities to do production-level gen AI right now without being a multi-billion-dollar corporation. In order to run these models, you have to have enormous compute, enormous power, enormous connectivity, which means that the vast majority of users are buying. They might have developed using it, but they are buying these services, which may or may not have been engineered the way Google Cloud, AWS and Microsoft have done it, and all three of those major cloud providers have had major teething problems.

Jon Gallagher:

So, as cloud advocates, we recognize that there are things you have to do to make sure that the problems in the cloud don't take you down. But you have this extra layer of AI where you don't know where it's running, and you don't have the ability to go back and understand or audit where every transaction occurred.

Jon Gallagher:

And the layer on top of that, circling back to compliance: where's your data residency? You may have broad data residency guarantees, things like, okay, I can assure you this is going to run within the United States. But the way that AI has been trained, a lot of data travels around the world feeding into these large language models. So how are you assured that the data that you're storing and working with stays within the boundaries it's supposed to? That was a question that Klarna had too. The Klarna CEO was challenged: hey, you're using a large language model, how do you keep your clients' financial data from ending up in someone else's model? And that is a big worry. That took us on a slight diversion there, because we had to do some traditional cloud worrying.

Jon Gallagher:

But the problems that we've been addressing throughout the history of this podcast have not gone away. They've become even more fundamental to the technologies that are built on top of the cloud, and the hype and excitement and icing on this cake is hiding from many people the fact that those problems have not gone away.

Logan Gallagher:

Yes. To go back to the question that sparked this episode: does generative AI disrupt SaaS? I think we would argue that you have to use the same evaluation framework as you always have for build versus buy. Generative AI could help you build faster, but that doesn't fundamentally change the framework when you're determining: am I writing software with unique business value, or can I purchase this process competency from a third party?

Jon Gallagher:

And many times that question is never answered. There's kind of a thumb on the scale on this build versus buy, because of the hype around AI, the ideas of vibe coding and the software engineering tools that come with it, that suddenly building is much less of a risk. I would argue it's more of a risk, and it's particularly more of a risk if you don't know why you're making the decision. If you didn't really say, fundamentally, I'm going to build versus buy because building means I'm going to replicate my processes and that's what I'm measuring myself against in making a CRM, then you're definitely risking money and opportunity cost, money and time, without a true goal for this engineering team in place. Why not buy the processes from someone like an auto parts software system or Salesforce or ServiceNow? There's a reason why people go out and get those and build fabulous companies: because that gives them time to focus on the processes, the secret sauce, they add to the marketplace.

Jon Gallagher:

I think this has turned into one of our soapbox episodes. Hey, that's okay. And just as a final note, we have an example, and this may or may not be controversial, but with the new administration in Washington we had advocacy of: we're going to vibe code the government, essentially. We're going to pump facts and figures and, good Lord, a lot of data through models and determine that this is a critical function of the government or that is a critical function of the government, and anything that's not a critical function we'll get rid of. And we are working our way through that now. That is something that we as a country and the world are going to have to take on: understanding whether this has been a good thing or whether it's fundamentally damaged many critical processes of the government.

Logan Gallagher:

I mean, speaking of software that was written a long time ago, where we don't have the context of the authors of the software: there are critical systems for things like Social Security written in languages like COBOL. It's pretty cavalier for developers to think that they can translate that with a generative AI model into a more modern programming language when, again, they might not know why the decisions were made to write the software in certain ways. They don't have that context. So this is a process that, if it is attempted, would have to be done extremely carefully and deliberately, and I don't know if we have evidence that that is occurring.

Jon Gallagher:

It would have to be like when you find a rare book that hasn't been opened for a thousand years. You put on the white gloves and start to carefully peel it back with tweezers. There is something wonderful inside of there, but the slightest miscalculation could destroy everything and force us to go back to paper backups. And that was one of the things that was a horror story at the beginning of this administration, that we still use paper for keeping track of federal retirement. Well, yes, because paper doesn't degrade as quickly as magnetic media. The federal government has electronic media that it can't read anymore, but you can still always read paper. So that's a function of government: to provide records that can be recalled 100 years down the road, and that is the secret sauce that government gives you, going back to this build versus buy. So when government is doing these processes and looking to replace these processes, the replacement has to be just as good, or better, for us to take on the risk of replacing a system or changing the decisions that have been made before. Okay. Now, that's not to say that there aren't some wonderful things that this marketplace is bringing us.

Jon Gallagher:

I'd like to close off by talking about the Box CEO. There's an article on LinkedIn, we'll put it in the show notes, where he talks about how Box is going to be a 100% AI company and how he uses AI. Ironically, AI for him is replacing all the assistants that he would have had working for him as the CEO 20 or 30 years ago. Think traditionally, maybe even back to the Mad Men era, where executives had assistants who did things like make sure the calendar is kept up to date, make sure the correspondence is flowing, make sure that any research material is retrieved from the libraries. I was just reading the LinkedIn article and that's exactly what he's asking his gen AI to do. Here's a new subject: go find me some research material on it. Find me a time in the next six months to regularly meet with my trader.

Jon Gallagher:

So in this context, what he's doing is getting the scut work done by AI. None of that stuff is the critical secret sauce that Box adds to the marketplace, but it is critical stuff that needs to get done. So if we're looking at agentic AI where things get done for us, so we as people can focus our decision-making on the decisions that make money for the company, that tends to be a better thing. But we need to make sure that we're riding herd on it and make sure it's not accidentally stuffing its own pocket somehow. Okay, anything else you want to hammer on this one?

Logan Gallagher:

I think... no, I think we've covered some good ground. I would just second that these tools can be very powerful. They can be very useful. We're using them to write code, but we're using them very carefully, and all the old frameworks and ways that we make business decisions don't change with these new technologies.

Jon Gallagher:

Yeah, it's very easy to say disrupt and then figure out what happened afterwards, but the people who come in with a fundamental analysis of where their market position is, the old-fashioned business planning, are going to be laps ahead of the people who are still trying to clean up the blood and the broken glass of the disruption that other people are doing. Absolutely. Okay, well, that was a good one. Remember, we've done other stuff on AI. If you're looking for a valuable way of implementing AI to help, let's say, a product or something, go back and take a look at our Blender episode, where Logan introduced using AI to feed into the Blender animation system to produce animations. And again, this is AI taking on scut work for you so you can work at the top level and let the implementation of moving the bits around be done by AI. So we're not 100% against AI. What we are 100% against is thoughtlessly using AI and trying to avoid the hard work of being a manager.

Logan Gallagher:

Yes.

Jon Gallagher:

Okay, thank you all for your time and we'll see you later. Thank you.

Announcer:

Thank you for listening to Cloud Out Loud podcast. Please let us know in the comments if you caught either of the gents calling a product or technology by the wrong name. Other information and suggestions are welcome too, or feel free to tweet us at @cloudoutloudpod or email us at cloudoutloud@ndhsw.com. We hope to see you again next week for another episode of Cloud Out Loud.
