Cloud Out Loud Podcast

Episode 36 - From Hackers to Kubernetes: How Open Source Evolved

Jon and Logan Gallagher


Ever wonder how free software ended up running the world’s biggest clouds? We pull on a thread that starts with the hacker ethos—access, transparency, community—and follows it through OSCON memories, Docker’s breakout moment, and the quiet power shift that came with Kubernetes. From early research culture to the realities of running production systems, we map how open source moved from ideal to infrastructure and why today’s cloud thrives on community-built code.

We look at why Docker’s elegant packaging changed developer workflows but didn’t solve the hardest problem: operating at scale. That’s where Google’s history with Borg mattered, and why Kubernetes arrived fully formed with controllers, declarative state, and battle-tested ideas. Crucially, governance moved to the CNCF, the Cloud Native Computing Foundation, letting every major cloud offer a managed Kubernetes service without locking users in. The result is a durable model for the cloud era: keep the core open and portable; offer the complicated parts—control planes, upgrades, reliability—as a managed service; and let teams build higher with confidence.

If you care about how software is built, funded, and run at scale, press play.

Links to topics from the show:
Hackers - by Steven Levy
Docker
Kubernetes
CNCF
Open Container Initiative
Podman


LinkedIn - Logan Gallagher
LinkedIn - Jon Gallagher

Announcer:

Welcome to Cloud Out Loud Podcast with your hosts Jon Gallagher and Logan Gallagher. Join these two skeptical enthusiasts (or are they enthusiastic skeptics?) as they talk to each other about the cloud, out loud. These two gents are determined to stay focused on being lazy and cheap as they evaluate what's going on in the cloud, how it affects their projects and company cultures, and sometimes how it affects the world outside of computing infrastructure. Please remember that the opinions expressed here are solely those of the participants and not those of any cloud provider, software vendor, or any other entity. As with everything in the software industry, your mileage may vary.

Jon:

Okay, welcome back, everybody. We're going to do something different with this one. We've been talking a lot about AI, and it hit us, Logan in particular, that there are some landmarks we've missed in the development of the technology that we use, particularly in the cloud, but also for AI. And that led to a discussion about open source, because the technology fundamental to the cloud, and fundamental to AI itself, really does come out of the open source community. It certainly didn't come out of commercial software development. It came out of research at universities; it's been a traditional part of the CS research environment, but it's also, at the same time, been part of the milieu of open source. We mentioned this in a previous episode: the book Hackers, by Steven Levy. And we'll make sure we give you a link in the show notes. Hackers is a snapshot in time. It was first published in 1984, republished about ten years after that, and then again for its 25th anniversary in 2010. At that first point in time, the IBM PC was only a few years old, and Microsoft had not yet become dominant through its operating system. It already existed, but it was best known for things like its BASIC interpreter. So the book was a snapshot in time, but it also delved into the philosophy of: hey, these wonderful computer things, what should we do with them? The book talks about the hacker ethos of "information wants to be free," a slogan that actually came out of Stewart Brand, and we'll talk a little bit more about him later on. But all the personalities in Hackers were wrestling with the fact that you had this universe opening up because of access to computing.
And one of the threads through that is: should this access to computing be available just to the kind of people who could take advantage of it, the prototypical personas of hackers, or should it be open to everybody? And, much like in hip-hop, there was kind of an East Coast-West Coast thing about that. The East Coast, in the book, was mostly dominated by MIT; the West Coast was the nascent Silicon Valley and the East Bay, Berkeley. The West Coast was more about getting computing to the people, whereas the East Coast was about how to make computing more powerful and open to the people who actually need it. The thing that caused this episode to come about, though, was kind of an outgrowth of that: the fact that Portland, where we are, became an important center for open source and had a very influential conference where a lot of stuff happened. And so I'm going to turn this over to Logan, who actually had the inspiration for all this, remembering his visits to OSCON.

Logan:

Yeah, we were thinking about what topics to cover, and very intentionally we decided we wanted to cover a topic that wasn't AI. There's been a lot of AI in the discourse, a lot of AI in our episodes. So as I was looking around, as we're getting to the end of summer, I was thinking kind of wistfully about an old tradition we used to have of going to the Open Source Convention down at the convention center here in Portland, and how fun that used to be: going to the convention floor and seeing all these booths for people who wanted to excitedly talk about their projects. There was a real optimistic, even idealistic, energy to the entire event. I loved attending. Then there were some headlines recently that caught my eye. One was a blog post from Google celebrating the ten-year anniversary of their Google Kubernetes Engine cloud service. Google Kubernetes Engine launched very shortly after the Kubernetes open source project released its 1.0 version, which was in fact at OSCON. I was in the room when they announced the 1.0 release, and I was at the after party for it too. So I was feeling nostalgic about that, and I found it interesting that they were celebrating the ten-year anniversary of this managed service version of open source software, where Google Cloud would run some of the more difficult components of this free framework. You can download it and install it on your own computers at home, run it in your own data center. But this cloud platform was offering a managed service version of this free software, and it had just reached ten years old. And then there was another blog post announcing general availability of MongoDB compatibility for Firestore, Google's serverless, non-relational database that came out of their mobile platform. And Mongo is an open source database that's been popular for around 15 years now.
Mongo came out of the big trend of developing non-relational databases that were highly scalable, where we could add additional nodes to a database to add capacity. We were less concerned about a highly relational data model, where the records we store in tables have relationships to one another expressed through foreign key attributes; we could just have one very scalable table where we store records. Mongo came out of that philosophy, out of that trend. And I found it very interesting that Google was very proudly announcing that their cloud database, Firestore, was now MongoDB API compatible. So clearly, open source is still having an enormous effect on how we develop software, to the point where the big cloud platforms are looking over their shoulders at the open source frameworks and offering managed service versions of them, where they'll run the software for you on their servers and you pay the cloud platform for doing so. And that had me thinking about the many ways that open source has changed. The Open Source Convention, OSCON, was discontinued in 2020, when O'Reilly, the publisher that ran it, ended their conference business. It now seems like a bygone era, a more idealistic and optimistic era, looking back on it here in 2025. So we've been reflecting on the ways in which open source has changed, and especially how open source evolved in what you might call the cloud era: broadly, 2010 to the present. Maybe we could have more definitive markers: 2010 to 2020, and then these last five years, whatever you want to call them. But we've been reflecting lately on all of the ways that open source changed, and how much of that change was driven by the rise of cloud computing.
We were trading stories back and forth and thinking about some of the tools we still use every day and the significant changes that have occurred with those tools in recent years.

Jon:

For the folks who don't really have an internal model of what open source is: it's often seen as a reaction to the fact that people had to pay for software, but that's not actually where it came from. There wasn't a software business, as a business selling packaged software you could go buy, for the first few decades of computing. Software was something you produced: you had a computer, and you hired people to manipulate that computer to produce the results you wanted. Put in the data, get the results. It was late in this process that companies found that multiple potential customers needed the same capability, particularly in things like accounting and general ledger, where you had a common standard in the generally accepted accounting principles that you could write software to. Then you could turn around and go to your potential customers and say: well, this software conforms to generally accepted accounting principles and will let you avoid maintaining your own accounts receivable system. You can buy it from us, we'll maintain the software for you, and you can focus on being a trucking company, or focus on being a Walmart. So packaged software came out of looking around and seeing an opportunity: there are standard ways of doing things, we'll build software that captures them, and you'll buy it from us rather than write it yourself. That started with accounting and spread to things like manufacturing control and process control. You ended up with a lot of companies that took their subject matter expertise, put it into software, and sold it to potential customers. Then, with the PC revolution, particularly starting with CP/M, you had companies that said: okay, we'll take on word processing software.
Now, word processing before the PC was a dedicated machine. You bought a Wang, and the Wang had word processing software in it, but you bought the Wang machine; the software was embedded in the machine, and that's what you worked with. Then people broke that software out and created things like WordPerfect and eventually Microsoft Word, and you would buy those packages, saying: okay, I need to create a document, so I'll buy this software to do it. At the same time, you still had the people who were writing software themselves, who looked at this software written for general consumption and said: why don't we do that? There's no real need to spend thousands of dollars on dedicated word processing software; I could produce that, or me and some people like me could produce something similar. Now, the folks doing this didn't necessarily start with office automation software like that. They started with things like code editors. One of the classic open source projects is Emacs, which was developed basically by RMS, Richard Stallman, at MIT. And that became an example of a powerful tool you didn't have to pay for; it was supported by the community. So rather than going to a particular company and saying, I'm going to buy this code editor, and you will support me, train me, and ship me updates, you invested in Emacs. You started using Emacs, you became part of a community that trained you, or produced materials you could use to train yourself, and as bugs were found, you submitted them. The Emacs community made that code editor better. So what you have are two separate pathways to software that's of use to people.
There's the now-traditional commercial method: packages of software that a company produces and sells to you, and may sell training, consulting, and add-ons for. And there's the open source community, with software that you can use and learn yourself or in conjunction with the community. The classic open source system is one you literally have the source code for, which is one of the main reasons we call it open source. So you had this dichotomy, and as companies moved from mainframes to minicomputers to PCs, companies always had that small-c conservative approach: I need another company or entity that's supporting me, I need to pay them, they need to support me, and if things go wrong, I'll sue them. That was the traditional contractual basis that business ran on up until then. And open source had a hard time getting traction in those traditional settings, because it didn't look like something a business could be built upon. Meanwhile, you had technologists going into these companies, doing things like standing up a database and realizing that this database we paid for is not necessarily providing any more capability than a database I don't have to pay for. So something like MySQL becomes popular. MySQL is an open source database system originally developed in Europe. It was brought under the wing of Sun Microsystems, which supported its development, and it's now somewhat under the control of Oracle, because Oracle bought Sun. That's one of the problems with open source: it depends on the generosity of the people who develop it and the people who provide enough money to maintain it.
So one of the reasons we sparked off the OSCON experience, and particularly the Kubernetes and Docker experiences, is because they are indicative of the problems and opportunities of doing open source. Both of them looked at those challenges and chose different ways of approaching them, one less successfully than the other. So we're going to talk about Docker and containerization in general. We'll talk about Docker specifically, but as we were saying before this, the term Docker has come to be like Kleenex for facial tissue: it's a word for the general approach to containerization. And then we'll talk about Kubernetes. So, Logan, set the stage for us.

Logan:

Yeah. So if we're thinking about open source in the era of the emergence of cloud computing, cloud computing really began to gain popularity in the 2010s. The major cloud platforms all released their first cloud services in the mid-to-late 2000s: Amazon starting their cloud in 2006, Google shortly after, and other platforms like Microsoft Azure following suit. One of the most significant technologies that arose at the same time as the cloud platforms is containerization. And containerization stands on the shoulders of many previous technologies. It comes from concepts that were part of the original open source operating systems, like Unix with the chroot jail concept, and later contributions to the Linux kernel, like the feature known as cgroups, which lets you isolate specific compute resources for a certain process on the operating system. In 2013, a team announced their new software, called Docker, that allowed you to bundle up your application code: whether you're running a Python application, a Java application inside a JVM, or any other programming language, it wouldn't matter. You could bundle up that application code, along with any libraries the code uses and any other packages the code would normally rely on from the operating system, into a package called a container image. The idea was that you could take that container image and run containers from it on any computer that had the container runtime software installed. And so Docker announced this software where you can bundle up your container images and then use Docker to run containers from them.
And the idea, and this is a promise as old as computer science, was that you could build your application once and run it anywhere. There were many, many caveats and asterisks to that, but it was still a very powerful pattern, and people got really excited by this Docker software. It was a project that really combined contributions from other open source communities, contributions from the Linux operating system ecosystem. But the Docker team bundled it all up nicely into very usable software that people could adopt and start implementing in their organizations or with their software projects. And it took off like wildfire. Containerization became an extremely popular pattern. Quickly, though, people started realizing that it's not just as easy as bundling up your app into a container and being able to run it anywhere, on any server. We had to develop additional software around running our applications in containers. And quickly, the Docker team created a company around their open source software and started having to figure out, learning some really hard lessons along the way: how do we run this open source project, and how do we run a company around it? How do we make money from it? They experienced some really significant growing pains that we, for better or worse, got to watch.

Jon:

The Docker folks had a fabulous idea. They introduced a fundamental technology that, as Logan said, pulled so many things together and solved so many problems. But the question is whether you could make a business out of that: whether the problems, which are fundamental and exist for everybody, could be put into a container, so to speak, and sold. It quickly became obvious that you had this open source approach to containerization, and people would make their containers and run their containers, and there were problems with coordination. I need to run a container that looks like this. If I'm running a website and containers are providing the back-end processing for my website, how do I make sure that my website is serving Spanish pages to people in Spain and English pages to people in other countries? How do I have both of them running at the same time?

Logan:

And additionally, how do I make sure that if this application needs a gigabyte of RAM or a certain amount of CPU in order to run successfully, the server I'm starting that container on has the available resources for it? And if I want to run five containers, or ten containers, or 200 containers, how do I make sure there's available capacity among the servers in my data center, or the servers at my cloud provider, to support that scaling? If I want to use containers to run my website at web scale, and I have a popular product or a popular app and everyone goes to my website all at the same time, how do I handle that scaling? How do we build an infrastructure to run all those containers? It quickly became a hard problem, and a lot of people started offering solutions. Some of those solutions were open source projects, and we quickly started to see the development of various container orchestrators and schedulers to help us run containers on our fleets of servers. There was Rancher, and Mesos, and Docker Swarm, Docker's own contender in the container orchestration contest. They were all fighting to become the preeminent tool you would use to run your containers.
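The capacity questions above are, at their core, a bin-packing problem. Here's a toy Python sketch of the kind of first-fit placement a container scheduler performs; the node names and resource numbers are invented for illustration, and real orchestrators weigh far more factors (affinity, spread, priorities) than CPU and memory:

```python
# Toy first-fit container scheduler: place each container on the first
# server with enough spare CPU (cores) and RAM (GiB). Hypothetical names
# and sizes, purely illustrative.

def schedule(servers, containers):
    placements = {}
    for name, (cpu, ram) in containers.items():
        for server in servers:
            if server["cpu_free"] >= cpu and server["ram_free"] >= ram:
                server["cpu_free"] -= cpu   # reserve the resources
                server["ram_free"] -= ram
                placements[name] = server["name"]
                break
        else:
            placements[name] = None  # no capacity: pending until a server is added
    return placements

servers = [
    {"name": "node-1", "cpu_free": 2.0, "ram_free": 4.0},
    {"name": "node-2", "cpu_free": 4.0, "ram_free": 8.0},
]
containers = {  # name: (cpu cores, RAM in GiB)
    "web-1": (1.0, 1.0),
    "web-2": (1.0, 1.0),
    "batch": (3.0, 6.0),
}

placements = schedule(servers, containers)
print(placements)  # {'web-1': 'node-1', 'web-2': 'node-1', 'batch': 'node-2'}
```

The two small web containers pack onto node-1 and the big batch job lands on node-2; answering "do I have capacity for 200 containers?" is this same check, performed continuously and at scale.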

Jon:

So where we are right now, assuming we're back in the early 2010s and running forward from there: containers are a really great idea for packaging software, for creating a runtime environment. The problem, as Logan pointed out, is when that runtime environment needs to scale, either out or in. If you have a website that can handle thousands of users, what happens at 3 a.m.? Do you still want to pay for your noon load at 3 a.m.? So you had a lot of people with some really good ideas about this, but this is a different kind of problem. The Docker team, in developing containerization, solved the problem of software packaging, and that's where their expertise was. The second problem is a completely different one: how do you operate computing at scale? How do you accept, let's say, a million search requests a minute across the world? And if you're providing ads, how do you insert ads across the world? Now, this should start to sound familiar to people who know the history here, because there was a company that was particularly good, even at the time, at developing scalable systems. And so that company now gets involved.

Logan:

Yep. So in the setting of the early 2010s, Google starts looking across the tech community, across the open source community, at the development and popularity of containers, and they start thinking to themselves: hey, we've been running containers, we've been doing containers, actually, for a lot of years, way before Docker became popular. And hey, we already have a container orchestration and scheduling framework that we use to run containers across our fleets of servers internally here at Google. That platform, internally, we call Borg. And yes, that is a Star Trek reference. Maybe Borg can't be directly externalized to the larger tech community; it has too many Google-specific features. But maybe we submit our own contender to the container orchestration wars, alongside Rancher and Mesos and Docker Swarm. Maybe we submit Kubernetes: we'll task a team with rewriting Borg as open source software that we can release for free to the open source community.

Jon:

And by the way, what was the name of the project to rewrite Borg so that humans could use it?

Logan:

It was another Star Trek reference. It was Jeri Ryan's character. I'm sorry, you're gonna have to remind me of the name.

Jon:

Seven of Nine. Yep, Seven of Nine. Yeah, you are definitely younger than me. I'm of the generation that immediately remembers Seven of Nine.

Logan:

So they attached a team, and that team's name was Seven of Nine, to rewrite Borg. They rewrote it in an open source programming language that Google had developed called Go, or Golang, and released it to the open source community. And immediately upon release, this was already a tool that had all the features you would really need. They had clearly been thinking seriously about problems like how you run different types of containers. Some containers might need to run for a short time to process some data, and as soon as that data is processed, the container can go away. Other times you need applications that run continuously, like when you're serving a website. Sometimes you need containers that store their data in an external database; they're stateless. Other times you need containers for applications that write their data to a file system; they're stateful. This platform could handle all of those workloads. It could handle scaling out the underlying servers as well as scaling out the containers to handle spikes in traffic to your website. They had clearly learned a lot of lessons running Borg, and they released an open source product that was fully featured from day one and had been battle-tested helping support the largest search engine ever built.
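Under all of those workload types sits the same core idea: declarative state. You declare how many replicas of a container you want, and a controller loop repeatedly compares the desired state to the actual state and converges them. A deliberately simplified Python sketch of that reconcile pattern (an illustration of the concept, not Kubernetes' actual controller code):

```python
# Minimal reconcile loop: converge the actual replica count toward the
# declared desired count, recording the actions a controller would take.

def reconcile(desired, actual):
    actions = []
    while actual != desired:
        if actual < desired:
            actual += 1
            actions.append(f"start replica {actual}")
        else:
            actions.append(f"stop replica {actual}")
            actual -= 1
    return actions, actual

# Scale up: one replica is running, three are declared.
actions, actual = reconcile(desired=3, actual=1)
print(actions)  # ['start replica 2', 'start replica 3']

# Scale down: the traffic spike is over, declare one replica again.
actions, actual = reconcile(desired=1, actual=3)
print(actions)  # ['stop replica 3', 'stop replica 2']
```

The key design point is that you never tell the system the steps; you tell it the end state, and the loop figures out the steps, which is also what makes self-healing (restarting a crashed replica) fall out for free.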

Jon:

So we have two different approaches happening here. We have the good idea: the Docker folks with a really great idea for packaging software. And then we have the veterans: a company whose business is not selling this software. If Google charged for Kubernetes, it wouldn't even be a rounding error in their ad revenue, or any other revenue they have. So they looked at that and said: we are going to participate in the open source community. Which, by the way, Google had always been part of. Google created its technology using open source, Linux mainly, and the tools it used were open source tools, and they fed back into the open source community. We mentioned cgroups, which were critical for isolating running processes in the Linux kernel. Google submitted cgroups as a change to the Linux kernel, which made their own life easier but was also, obviously, the spark for containerization. And ironically, up until Kubernetes, Google was pretty much known for inspiring open source rather than actually producing open source.

Logan:

Yeah, famously, they would often publish a white paper describing some amazing software architecture they had developed internally. They published a research paper describing a data processing architecture where they could take large amounts of data, chunk it into smaller pieces, and then use a fleet of servers running on commodity hardware, where each worker could be assigned a chunk of that data and process it, and all of the outputs from all of those worker nodes would then be reassembled into the final output. That let them process very large amounts of data. They called this pattern MapReduce, building on existing computer science concepts. And when they published that research paper, it inspired some folks over at Yahoo to create an open source project called Hadoop, which became the biggest big-data, free, open source software out there in the community. But Google had published the white paper; someone else had to actually productize the software. And they kept on doing that. They described the large database they use for ingesting lots of data when they're doing things like scraping the contents of the internet, an amazing architecture for what we would later learn was Bigtable, and that inspired another big data project, HBase. They've even continued to do this. In 2017, they described an amazing machine learning model architecture they had developed, called the Transformer, that was really good at predicting the next word in a sentence and could build very human-sounding paragraphs of text. And some folks over at OpenAI created GPT from that. But Kubernetes is sort of the exception here. There are a few others; they developed other tools like TensorFlow. But for Kubernetes, they actually assigned a team to write the software in-house and then released it as open source.
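The MapReduce pattern described above fits in a few lines of Python: a map step turns each chunk of input into key-value pairs, a shuffle groups the pairs by key, and a reduce step combines each group. In the real system the chunks, mappers, and reducers run across many machines; this single-process sketch, with made-up input text, only shows the data flow:

```python
from collections import defaultdict

# Word count, MapReduce style, in one process.

def map_phase(chunk):
    # Emit a (word, 1) pair for every word in this chunk of input.
    return [(word, 1) for word in chunk.split()]

def shuffle(pairs):
    # Group all emitted values by key, as the shuffle stage does.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Combine each key's values; for word count, that's a sum.
    return {key: sum(values) for key, values in groups.items()}

chunks = ["open source runs", "the cloud runs open source"]  # one chunk per worker
pairs = [pair for chunk in chunks for pair in map_phase(chunk)]
counts = reduce_phase(shuffle(pairs))
print(counts)  # {'open': 2, 'source': 2, 'runs': 2, 'the': 1, 'cloud': 1}
```

Because map runs independently on each chunk and reduce runs independently on each key's group, both stages parallelize across as many machines as you have, which is the whole point of the pattern.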

Jon:

And importantly, they did something that Docker didn't do: they didn't try to build a business off of it. Docker releases this container technology and then it's like, okay, we need to generate income and revenue. We have this open source component, we've described the container, everyone can do containers now. How do we make money off of that?

Logan:

They started taking venture capital funding. To date, Docker Inc. has raised $540 million in venture capital across various rounds. And we're just outsiders looking in, we don't have any direct knowledge of this, but it certainly felt to us, watching this happen, that they were rooting around looking for a way to make a product because they had taken in all this funding.

Jon:

And meanwhile, Google's like: first of all, we don't need this as a business. And second, we don't think it can be a business. If this is going to be an open source standard, we need to divorce it from us, because we're Google, and we shouldn't be trusted with influencing the direction of it; there are better ideas out there. We want this to be picked up. So Google turned the guidance of Kubernetes over to a foundation while still providing person power and funding. They no longer control the destiny of Kubernetes.

Logan:

And that also led to larger community buy-in. They more or less had to stand up this foundation, now the Cloud Native Computing Foundation, and get other large organizations involved as contributors and maintainers of the software project. So all of Google's competitors in the cloud space became major contributors and maintainers of this shared project, Kubernetes.

Jon:

So now you've got Kubernetes, which works across clouds. The cloud is a perfect environment for running containers, but the coordination of containers is a huge job, and that's what Kubernetes addresses. Cloud providers had offered their own tools; in the case of Amazon, it was the Elastic Container Service. But Kubernetes, not being controlled by a competitor, was another service they could offer, and safely offer, to their users.

Logan:

Yeah, today all of the major cloud platforms run a managed version of Kubernetes. Notoriously, there are some parts of Kubernetes that are difficult to run. When you have Kubernetes, you have a group of servers, a cluster, and as part of that cluster you have a subgroup of servers called the control plane. They're the brain of the cluster. And if that brain gets out of whack, it can be extremely painful to fix and resync the control plane. I think we both have some scar tissue from having to deal with that in the past. So the cloud platforms said: hey, we'll offer you a managed version of this where we'll run the brain for you. We'll run the control plane, and you can set up a Kubernetes cluster in your cloud environment really quickly and then run your containers on it. You get all of the advantages of Kubernetes, but you don't have to run the hard parts. So now we have Amazon with EKS, the Elastic Kubernetes Service; Azure with AKS, the Azure Kubernetes Service; and Google with GKE, Google Kubernetes Engine, which just hit its ten-year anniversary. Since they developed the software, it might make sense that they're pretty good at running it for you. Now all the cloud platforms, even some of the minor players, have a managed version of Kubernetes, this free and open source framework. You could go to the Kubernetes website and download the software onto your own computers for free, but it is worthwhile for many people to pay a cloud platform to run it as a managed service.

Jon:

That seems to be the sweet spot for open source. People cast about a lot trying to figure out how to make money off of open source. Now, obviously, people want to get rich, but objectively, you need to pay your way in this world. Someone needs to pay the rent and keep the lights on. And how does that happen with open source? For the longest time, it was out of the goodness of people's hearts that things like Emacs were maintained. Then you had things like the Apache Foundation: the Apache open source web server, created under a foundation that people could feed money into, which collected it and redistributed it. And now, with packages like Kubernetes, there's a centralized expertise. That control plane, and many other things about Kubernetes, require expertise that isn't relevant to your development efforts. Why not pay Microsoft or Amazon or Google to handle that for you? In all these cases, all you're paying for is the incremental machines to run Kubernetes on, to run your containers on. So that is a very successful business model that has developed with open source in the cloud, which has fundamentally been based on open source. In the beginning, AWS's virtualization layer was an open source virtual machine monitor. And at Google, as Logan mentioned, they've been using containers, with Borg, for decades.

Logan:

And the operating systems of the majority of servers have been open source, or at least from the Linux family tree.

Jon:

Exactly. And so Linux, and in some cases BSD, are fundamental to the cloud itself. That's where open source is: it's in the infrastructure, it's in the pipes. It isn't necessarily showing up for the casual user. It shows up for developers, and in our next episode we're going to be talking more about the development tools and environments that have come up. We'll be talking about databases, and we'll talk a little bit about editors and the fact that Microsoft took a product that cost hundreds, if not thousands, of dollars and made it completely open source for everybody.

Logan:

Yeah, I think where we are now is that Docker's still around, but in many ways it's a Kleenex situation. People say Docker, but they're referring to containers, and many times they're not actually running Docker software. They're running other containerization software; I use Podman, for example. Docker itself is now built on the underlying open container standards, which came about in part because Kubernetes didn't want to depend on Docker specifically. So Docker has had some problems, and Kubernetes remains preeminent. In our next episode, we're going to take a look at some of the growing pains that many open source projects have gone through during what we could call the cloud era.

Jon:

Excellent. Okay, so come back for our next episode, where we'll be talking about developing with the open source tools we have available and creating products out of them. We'll also be talking a little bit about security. That was one of the things the business community, looking at open source, considered the nail in the coffin: they couldn't guarantee that open source was secure. Well, we've learned a lot since then. A lot of commercial packages have shown security flaws. And one of the great things about open source, as we'll talk about, is its ability to be agile, to respond to security challenges. So thank you for tuning in to this episode, and please join us for the next one.

Logan:

Thank you. See you next time.

Announcer:

Thank you for listening to Cloud Out Loud Podcast. Please let us know in the comments if you caught either of the gents calling a product or technology by the wrong name. Other information and suggestions are welcome too. Or feel free to tweet us at @cloudoutloudpod or email us at cloudoutloud@ndhsw.com. We hope to see you again next week for another episode of Cloud Out Loud.

People on this episode