Cloud Out Loud Podcast

Episode 33 - Creating an Animation Application using AI

Jon and Logan Gallagher

In this episode we try out what we've speculated AI, particularly GenAI, is good for: expanding the capabilities of engineers as they create new applications.

This application uses Google's Cloud Run and Google Cloud Storage to take a user's specification of an animation, create the animation, then store and play the animation.

This is the link to the GitHub repo for the application.

The AI model for the application is Google's Gemini, and the program that generates the animation is Blender. The demo is programmed in Python and uses the LangChain framework to leverage Large Language Models (LLMs) in the application.


Announcer:

Welcome to the Cloud Out Loud podcast with your hosts Jon Gallagher and Logan Gallagher. Join these two skeptical enthusiasts (or are they enthusiastic skeptics?) as they talk to each other about the cloud, out loud. These two gents are determined to stay focused on being lazy and cheap as they evaluate what's going on in the cloud, how it affects their projects and company cultures, and sometimes how it affects the world outside of computing infrastructure. Please remember that the opinions expressed here are solely those of the participants and not those of any cloud provider, software vendor or any other entity. As with everything in the software industry, your mileage may vary.

Jon Gallagher:

Welcome back everybody. It's been a while, but this is going to be a podcast we've been promising for a long time, so welcome back. I'm Jon Gallagher, here with Logan Gallagher, and this is the Cloud Out Loud podcast. We're going to talk about the process of actually using the cloud environment that we've been talking about, with a practical application of AI. I know in previous episodes we've been questioning it, possibly slagging it. We definitely are not sold on the hype of AI, but we are sold on the basic process of making things work and, on top of everything else, helping us scale: doing all the scut work but, at the same time, taking an innovative idea that you can express somehow and maybe blowing it out, maybe giving it a bigger platform. So we're going to talk today about an application that Logan's been working on. We hope that by the time this podcast comes out, or very soon thereafter, we can make it available through GitHub. Logan, why don't you give us a little background on what you're trying to accomplish with both the cloud environment and AI in this context?

Logan Gallagher:

Absolutely. So, to set the context: I have always been interested in creating art or animation, but I've never found myself very adept at it. There's an open source software tool that I have tried to learn for years, called Blender, that you can use to create animations and 3D images. I have, over the years, tried to pick it up, have found it a pretty difficult tool to learn, and have set it down repeatedly. And I had a bit of an aha moment last month. I had always known that Blender supports Python, that you can run a Python script to define the 3D objects you want to render in Blender, and even create animations, all in Python. So I started playing around with having an LLM create the Python script for me to generate an animation in the Blender software. Once I got that working, I saw some of the pretty exciting potential there, where I was passing it prompts like "animate a bouncing ball" or "animate all of the planets in a solar system rotating around the sun" and it was able to successfully do that. I came up with an app idea, because when I'm teaching classes I like to have lots of demos, and this seemed like a fun demo to show off a couple of different pieces of capability.
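
Show notes:

For listeners following along, this is roughly the kind of script Blender can run. It's a minimal sketch using Blender's built-in bpy API, assuming a simple bouncing-ball prompt; the object names and keyframe values are illustrative, not the exact code the LLM generates for the app.

```python
import bpy

# Start from an empty scene
bpy.ops.wm.read_factory_settings(use_empty=True)

# Add a UV sphere to act as the ball
bpy.ops.mesh.primitive_uv_sphere_add(radius=1.0, location=(0.0, 0.0, 4.0))
ball = bpy.context.active_object

# Keyframe a simple bounce: high at frame 1, near the ground at frame 12, high again at 24
for frame, z in [(1, 4.0), (12, 1.0), (24, 4.0)]:
    ball.location.z = z
    ball.keyframe_insert(data_path="location", frame=frame)

bpy.context.scene.frame_end = 24
```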

Logan Gallagher:

So I started working on an app where I would pass a prompt to a web app for an animation that I would like to generate, and then that web app would pass that prompt to one of the large language models. I'm using Gemini from Google Cloud, but this could work with GPT, or Claude, or Llama from Meta. It returns the Python script to generate that animation, and I'm having my app use the Blender software to render the animation from that Python. So in my app I've loaded the Blender binary into a container image to run the software inside of a container, and my application is running as two containerized services on Google's serverless container service called Cloud Run. The results have been pretty exciting, so I've been really enjoying exploring some of the capabilities. I've also learned a lot of lessons. It's been a lot of trial and error, so there are many things where I can highlight a potential pothole so that maybe someone else doesn't have to run into it.

Jon Gallagher:

That's what we're here for: to hopefully explore the road ahead and try to make it safe for people as much as we can. So, you've been alluding to it so far in the discussion, but explicitly tell us what kind of tools we're using to do this. First of all, the platform: Google's serverless platform, Cloud Run, where you are containerizing it. There isn't anything special about this; it's just that people frequently marry containers with Kubernetes, or containers with orchestration, but here you have two containers that are talking to each other. What are you using to do that?

Logan Gallagher:

Yeah. So for this type of application, where I'm just going to be using it for demo purposes, where I'll pass a request to my web app and then get a response back, I don't need that application to be up and running 24-7.

Logan Gallagher:

And the Cloud Run platform lets you bundle up your application code into a container image and upload that container image to a repo, like Google's container registry, called Artifact Registry, or a public registry like Docker Hub. Once you've uploaded that container image to a registry, you can deploy it to Cloud Run, and the platform will run container instances of your application for you serverlessly. Serverlessly obviously meaning they're running the servers; you're not paying for long-running provisioned resources. Whenever I pass a request to my application, if there is no instance of my application already up and running, Cloud Run will spin up a new container instance to handle my request, process the request and return a response. If there is a container instance already up and running, it may pass the request to that instance or, depending on scaling needs, might spin up another instance. It seemed like an ideal platform to run my app.

Jon Gallagher:

It really is. And just to emphasize for the audience: if you're not doing a demo, you're not paying for the compute. There's no server that we're paying for that's just sitting there waiting for input.

Logan Gallagher:

And in fact, you are only paying for that Cloud Run container when it is starting up, when it's processing requests, or when it's shutting down. If it's sitting around idle, where there are no requests coming in but there might be an instance of your app still running somewhere, you're not paying for that. It really seemed ideal.

Jon Gallagher:

Particularly for a demo situation, it's wonderful. Now, we have had pushback from people who are saying, oh, what about a cold start and the amount of time it takes to spin up? There are ways around this. If you're truly worried about responsiveness, you can have an instance that's always running. But I wanted to make sure I emphasized what you said before, which is that the first thing it checks is to see if an instance is already there, so it doesn't have to spin one up. It also has some logic that knows this instance has just finished processing, so it will put your request in the queue for that other instance, so you're not paying for two. Behind the scenes, Google is trying to optimize your resource utilization and save you money.

Logan Gallagher:

Yeah, so it was really perfect for me. The other reason why it seemed perfect was that I wanted to bundle this other software, Blender, into my own application. With containers, I could just load the Blender binary into my container image and use it in my application code. So containers were really a perfect way to bundle up and deploy my application.

Jon Gallagher:

Yeah, so that's the second thing about your environment. As you said, Blender and the background code are sitting there. If Blender goes through another upgrade, the only change you're making is to your background processing container. The front end and all the code that's necessary for that are completely divorced from each other. You're just passing the prompt in to the LLM, and then the container takes care of invoking whichever version you've specified. So if you rebuild the container and pull in the latest Blender, there's no effect on the front end.

Logan Gallagher:

Yeah, and for those two containers, I've split my application up into a front end and a back end. The front end is serving a web page. I'm using Python, so I'm using the library Flask to set up a really simple web page. If someone goes to the application URL, they'll be presented with that web page, where there'll be a field where they can input the prompt for the animation they want to create. When they run that prompt, the front-end web app is going to pass that request to my back end, running as a separate container on Cloud Run, and I have that back-end container locked down so it can only be invoked by the front end.
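
Show notes:

A minimal sketch of what the front-end service can look like with Flask, assuming the requests library is used to call the back end; BACKEND_URL, the /render path, and the template names are illustrative placeholders, not the app's actual code.

```python
import os

import requests
from flask import Flask, render_template, request

app = Flask(__name__)
BACKEND_URL = os.environ["BACKEND_URL"]  # URL of the back-end Cloud Run service


@app.route("/", methods=["GET"])
def index():
    # Serve the page with the prompt input field
    return render_template("index.html")


@app.route("/animate", methods=["POST"])
def animate():
    prompt = request.form["prompt"]
    # Forward the user's prompt to the back-end service
    resp = requests.post(f"{BACKEND_URL}/render", json={"prompt": prompt}, timeout=600)
    resp.raise_for_status()
    # The back end responds with a signed URL to the rendered animation
    return render_template("index.html", video_url=resp.json()["url"])
```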

Logan Gallagher:

If someone were to know the URL of my back-end application, they would still not be able to access it, because they don't have the necessary permissions to invoke that application. So everything has to go in through the front door of my web app. They can't go in through any side channel, and that is ideal for me, because rendering animation is not the least expensive thing in the world. So if I want to add any logic for rate control, or if I want to add any user authentication at some point, I can enforce that all in my web app and really protect my back end, which is the more expensive component to run.

Jon Gallagher:

Yes. So the part that the user sees is completely independent of the back end. You can easily pop that off and put in some sort of challenge mechanism, or some sort of billing mechanism, or, as you were saying, some sort of user identification system to check that they're completely authorized. Or it could come off of a mobile app, for example: the mobile app could call the front end, and the authentication could occur between the mobile app and the front end. So you have no authentication code in the back end. Nothing's directly touching the back end; it's just happily doing the LLM interface and the rendering interface. Security is a wrapper around it.

Logan Gallagher:

Yeah, and all of that's being handled by Google's Identity and Access Management, IAM, so you don't have to add any custom code there on the back end. The front end does have to sign its requests to the back end, so it does have to retrieve some credentials and pass them in the header of the request, but all of that is still very straightforward.
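
Show notes:

On Cloud Run, service-to-service calls are typically authorized with an identity token whose audience is the receiving service's URL. Here's a sketch of how the front end can fetch and attach that token, assuming the google-auth library; the /render path is again an illustrative placeholder.

```python
import google.auth.transport.requests
import google.oauth2.id_token
import requests


def call_backend(backend_url: str, payload: dict) -> requests.Response:
    # Fetch an ID token whose audience is the back-end service's URL;
    # on Cloud Run this uses the service account the container runs as
    auth_req = google.auth.transport.requests.Request()
    token = google.oauth2.id_token.fetch_id_token(auth_req, backend_url)
    # Sign the request by passing the token in the Authorization header;
    # IAM rejects callers without the invoker permission on the back end
    return requests.post(
        f"{backend_url}/render",
        json=payload,
        headers={"Authorization": f"Bearer {token}"},
        timeout=600,
    )
```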

Jon Gallagher:

We're taking advantage of the cloud. We're leveraging not only the infrastructure, but also the security aspects and the flexibility aspects of it. So, back to our original reason for creating this podcast. You've talked a little bit about it, or around it: what specifically are you using to implement this? Flask? So you're using Python.

Logan Gallagher:

Yeah, so I'm using Python for the front end and the back end. On the front end, I'm using the Flask library to serve my web page, and on the back end I'm also using Flask to serve the API endpoint. So when a request is passed to my back end, I'm using Flask to define the available API paths that are supported, and when a request goes to a specific path, that kicks off the logic in my back end to call an LLM, retrieve the script to use to generate an animation, and then, once the animation is generated, save that animation file to a Cloud Storage bucket for long-term storage.
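
Show notes:

A sketch of the back-end endpoint's shape, assuming Flask and the google-cloud-storage client. The generate_script and render_with_blender helpers are hypothetical stand-ins for the LLM call and the Blender step described in the episode (each sketched separately below), and the bucket name is illustrative.

```python
import os

from flask import Flask, jsonify, request
from google.cloud import storage

app = Flask(__name__)
BUCKET_NAME = "my-animations-bucket"  # illustrative bucket name


def generate_script(prompt: str) -> str:
    """Call the LLM for a Blender script (see the LangChain sketch later)."""
    raise NotImplementedError


def render_with_blender(script_text: str) -> str:
    """Run Blender headlessly and return the rendered file's path (sketched later)."""
    raise NotImplementedError


@app.route("/render", methods=["POST"])
def render():
    prompt = request.get_json()["prompt"]
    script = generate_script(prompt)          # step 1: LLM returns a bpy script
    video_path = render_with_blender(script)  # step 2: Blender renders the animation
    # Step 3: persist the rendered file to Cloud Storage for long-term storage
    blob = storage.Client().bucket(BUCKET_NAME).blob(
        f"animations/{os.path.basename(video_path)}"
    )
    blob.upload_from_filename(video_path)
    return jsonify({"object": blob.name})
```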

Jon Gallagher:

Now, the back end. We're using Python for some obvious reasons. Flask makes the whole API handling very easy, but there's also the fact that the result of the call to the LLM will be Python code, and that's what's being injected into Blender.

Logan Gallagher:

Yeah, Blender has a Python API, and that is its only API. So Python is its preferred language for running scripts in Blender. We're standardizing on Python across the board here.
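
Show notes:

A sketch of how a back end like this can invoke a bundled Blender binary headlessly from Python. The binary path and timeout are assumptions about the container layout, not the repo's actual values.

```python
import subprocess
import tempfile


def render_with_blender(script_text: str, output_path: str) -> str:
    # Write the LLM-generated script to disk so Blender can execute it
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(script_text)
        script_path = f.name
    # --background runs Blender without a UI; --python executes the script,
    # which is expected to render its output to output_path
    subprocess.run(
        ["/opt/blender/blender", "--background", "--python", script_path],
        check=True,
        timeout=600,
    )
    return output_path
```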

Jon Gallagher:

But again, because you've compartmentalized the back end and the front end, on the back end it's just easier to do everything in Python.

Logan Gallagher:

But if someone wanted to come in and use React, or wanted to use their own language, Swift or anything, it's just a case of the front end having the right credentials to pass into the back end and make it perform. I've been playing around with an idea where I think it might be fun to maybe have a catalog of pre-written prompts in the front-end web page, where you could click on one of the pre-written prompts and generate an animation with it, and I think that would require a little more JavaScript. So I've been weighing maybe rewriting the front end using React or Next.js or something of that sort, and I can totally do that without touching the back end in this modular architecture.

Jon Gallagher:

So I'm kind of jumping in here, because this is a great example of creating an architecture where you're divorcing the needs of the front end and the back end. And I emphasize this because I basically forked your code, cloned it out of GitHub, and I'm working my own path on this without having to interrupt what you're doing on the back end. So if I wanted to put special flowers or unicorns or sparklies on it, that's not going to interfere with anything that you're doing. Teams have been requiring far too much knowledge of the entire stack. One of my triggers is obviously AI, but another trigger is "full stack engineer", because that implies someone who could do user interface, user experience, and go all the way down to optimizing database calls. That's a unicorn that doesn't exist.

Jon Gallagher:

One side or the other is not going to be pretty. But if we can break that out, have the appropriate skill set involved, have the interface work across that, and use the infrastructure like Google Cloud to secure it, now we are truly being as effective as possible. Absolutely. We've talked a little bit about Blender, and we've talked about the API interface to Blender, but we're talking about AI here. How are you talking to the LLMs? It's Python, so what are you using from Python to make your prompts and get the responses?

Logan Gallagher:

The other tool that was a big aha moment for me when I was working on this app was when I realized it would be useful to have some way to define the steps in my workflow. Because my back-end application is taking a request with a prompt, passing that prompt to a large language model, receiving a response from the large language model with a Python script, and then running that Python script in the Blender software to generate an animation. Once that animation file is created, it saves that file to a cloud storage bucket and then, finally, passes a response back to the front end with a link to that file in the bucket. It passes that file link as a signed URL, so the user interacting with the front end is only able to access that single file within the bucket, which helps with securing and locking down my back-end architecture. All of those steps are pretty complex, each in their own right, and working on this project I really found the utility of using a framework like LangChain.
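
Show notes:

A sketch of the signed-URL step, assuming the google-cloud-storage client; the expiration is illustrative. The URL grants time-limited read access to that one object only, which is what keeps the rest of the bucket locked down.

```python
from datetime import timedelta

from google.cloud import storage


def signed_url_for(bucket_name: str, object_name: str) -> str:
    blob = storage.Client().bucket(bucket_name).blob(object_name)
    # V4 signed URL: time-limited, read-only access to this single object
    return blob.generate_signed_url(
        version="v4",
        expiration=timedelta(minutes=15),
        method="GET",
    )
```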

Logan Gallagher:

LangChain is a framework for interacting with large language models. It offers a couple of different capabilities, but the capabilities that really attracted me to it, and that I'm using in this application, are the ability to templatize your prompts, because I am taking in a user prompt and then adding a bunch of additional details to make sure that the large language model formats the Python code in an expected manner. So I'm sort of combining the user prompt with some hard-coded prompt I've already written, and LangChain helps me do that. LangChain helps me define the steps of my workflow, and it provides an abstraction layer over the LLM. Right now I'm using Google's LLM, Gemini, but in the future, if I found that I wanted to use GPT from OpenAI, or Claude from Anthropic, or Llama from Meta, it's a drop-in replacement, and LangChain could help me with that as well. So LangChain really helped me define a workflow, templatize my prompts, and gave me a nice abstraction layer if I want to make additional changes later on.
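
Show notes:

A sketch of the two LangChain features described here, templatized prompts and a swappable model, assuming the langchain-google-vertexai integration package; the system instruction text is illustrative, not the app's real hard-coded prompt.

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_google_vertexai import ChatVertexAI

# Hard-coded instructions get combined with the user's prompt by the template
prompt = ChatPromptTemplate.from_messages([
    ("system",
     "You write Blender Python (bpy) scripts. Return only runnable Python, "
     "with no prose and no markdown fences."),
    ("human", "{user_prompt}"),
])

llm = ChatVertexAI(model_name="gemini-1.5-pro")  # swap in another chat model later

# LangChain's pipe syntax chains template -> model -> plain-string output
chain = prompt | llm | StrOutputParser()
script_text = chain.invoke({"user_prompt": "Animate a bouncing ball"})
```

Because the chain only depends on the generic chat-model interface, replacing ChatVertexAI with another provider's chat model is the drop-in swap described above.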

Jon Gallagher:

So, some clear advantages: the one we've been hammering on a lot, which is breaking out functionality, hiding capabilities, and only focusing on what you actually want to give to the LLM. But for people who are looking to explore the capabilities of different LLMs, this is a big opportunity as well. They can have a task, or a number of tasks, and see how well the different LLMs, Llama, ChatGPT, Gemini, et cetera, do. So this approach could be a testing framework that allows them to understand which ones they should engage with. Or maybe there isn't any point in being choosy: maybe they all have the same capability, and maybe you engage for an economic reason. So I'm emphasizing here that, as engineers, we want to give the business the ability to optimize capabilities and cash flow, using libraries like LangChain to do that.

Logan Gallagher:

Absolutely. And all of that logic that I just defined is about 250 lines of code, and a large amount of that is because I like to have very extensive error handling in my code. So the actual logic completing all of those steps really does not require a ton of code; it's all in one file presently. I was able to get up and running really quickly.

Jon Gallagher:

Obviously, if you're listening to the podcast on audio, you can't see this, but I'm looking where you have your code up on the screen, and I would say it's about 10 to 1: ten lines of error handling to one line of code, which is the right ratio.

Logan Gallagher:

The other part is the very long hard-coded prompt that we're passing to the model along with the user prompt. So very little of it is actual code logic. LangChain really helped with that.

Jon Gallagher:

We can stop here and emphasize: it's not a joke when we talk about the fact that it's 10 to 1 error handling to lines of code, because what you're attempting to do here is very complex, and the side effects of something as simple as misspelling a key have ramifications that you could spend the day debugging. But a good error message can point you right to the problem, and you can say, gosh darn it, self, GCP is spelled G-C-P, not G-C-R.

Logan Gallagher:

Yeah. I think I'm always hesitant to add additional frameworks to my code. So at first I tried to do it with just pure Google client libraries, and I did get pretty far. But LangChain did really help me better define the full flow of my app. It gave me some nice features in terms of helping me templatize my prompt, and later, if I want to drop in a different LLM, it'll help me do that as well. So I did try initially not to use any additional frameworks. I don't see the value in using a framework if I can do something without it, but it did actually help a lot.

Jon Gallagher:

Yeah, you want to pick a framework to save you time and effort and, in particular, a framework that's going to save you repeating code.

Logan Gallagher:

Yeah, not just because it's the cool, hot thing.

Jon Gallagher:

Yes. And obviously LangChain is open source, as is Flask. There are a lot of things going on in software engineering, actually in end-stage capitalism itself, where products are degrading. There's a rude term for that that we won't say here, but we'll give you a link to it.

Logan Gallagher:

Commercial products do not seem to be maintaining their quality, and we have seen that even in the AI space. Even though we've only seen about 18 to 24 months of things really bubbling, we've seen projects already become discontinued. I know we have some folks we work pretty closely with who have already encountered frameworks they were using months ago that are no longer supported. So I was very intentional in picking very well-supported frameworks that I don't have to worry about, or hopefully don't have to worry about, going forward. I did investigate some other ones as well, LlamaIndex and LangGraph, and those seemed like overkill for what I was trying to do.

Jon Gallagher:

Yeah, they do seem like it. I think LangChain is going to be very valuable in this context, in making sure that if things evolve too quickly, it's on the LangChain team to keep up or to possibly EOL things. Okay. So the final thing about the tool space is automation and DevOps. What tools are you using there?

Logan Gallagher:

So I am currently getting all of this set up in a Terraform template, so I can define all of the resources that I need to provision: the Cloud Run applications, the Cloud Storage bucket, the API keys. I'm getting them all set up in Terraform so that, hopefully, when this podcast is published and there's a link to the repo in the show notes, it will be possible to deploy this entire application by running Terraform, where I've defined everything in code in a Terraform template and can stand up the application just by deploying that template.

Jon Gallagher:

Note for people who are looking to possibly arbitrage clouds: this is probably going to be a GCP-centric Terraform implementation for the time being.

Logan Gallagher:

Yeah, my current version of the app is running in GCP, but all of these components, with some refactoring, could run on App Runner on AWS or Container Apps on Azure. Both of those platforms offer a serverless container service and object storage buckets. But you've got to pick one to start with, so this is running on GCP in the initial version.

Jon Gallagher:

And it'll be a bit of a tough row to hoe to get the security lockdowns that we already have with something like GCP.

Logan Gallagher:

Yes. GCP does have a lot of nice things out of the box that would have to be rolled yourself elsewhere, so it would require some effort.

Jon Gallagher:

So, given all that effort, what are the results? We've talked a little bit about them so far, and you've demoed some for me, and I'm frankly blown away by the whole thing. How's it turned out so far?

Logan Gallagher:

So far, so good. I can successfully pass a prompt to my app to generate an animation, like a bouncing ball or a solar system of planets, and the back end will generate that animation file, save it, and return it to the front end, and the front end can display it using a JavaScript library called Three.js. I'm definitely really excited about some of the capabilities and things that I've learned from this application. I'm planning on using it as a demo in the classes I teach, but I think that there are some things we've learned working on this that may be applicable to other applications we're working on in the future.

Jon Gallagher:

Which kind of segues into the last thing I wanted to ask you, which is lessons learned. You've talked a little bit about them, but in general, what do you really feel like you've learned, and what do you think you can leverage?

Logan Gallagher:

Yeah, I've definitely learned the utility of using a framework for defining the steps in an application that interacts with a large language model. So I definitely see the utility of using a tool like LangChain. LangChain also has some other capabilities: if you're wanting to pass large text files to a model, LangChain can help you split up that text file into chunks and pass those chunks to the model if the model has any type of input character limit.
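
Show notes:

A sketch of the chunking capability mentioned here, assuming the langchain-text-splitters package; the input source and chunk sizes are illustrative.

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

long_document_text = open("big_input.txt").read()  # illustrative input source

# Split on natural boundaries (paragraphs, then sentences) into model-sized pieces
splitter = RecursiveCharacterTextSplitter(chunk_size=4000, chunk_overlap=200)
chunks = splitter.split_text(long_document_text)
# Each chunk can now be passed to the model within its input limit
```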

Logan Gallagher:

I've learned a lot about how to set up the API keys and successfully pass those API keys into my application. I definitely had to do quite a bit of making sure that my file volume mounts were lined up correctly; somehow, file organization is still the hardest part of software. And I am very interested in exploring further an area that I just started to look into in this application, called function calling, where you can define a function, just a function in your application code, like a function that gets the average of a list of numbers, or divides one number by another, or does any other thing you might define in a function. You define those functions, you provide them to the LLM, and the LLM can decide when to invoke them. For a more complex application, I am very interested in exploring function calling further (it's also known as tool use).
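
Show notes:

A sketch of function calling (tool use) in LangChain terms, since that's the framework used in this app; it assumes the langchain-google-vertexai package, and the tool itself is the illustrative averaging example from the episode.

```python
from langchain_core.tools import tool
from langchain_google_vertexai import ChatVertexAI


@tool
def average(numbers: list[float]) -> float:
    """Return the average of a list of numbers."""
    return sum(numbers) / len(numbers)


# The model sees the tool's name, docstring and schema, and decides when to call it
llm = ChatVertexAI(model_name="gemini-1.5-pro").bind_tools([average])
response = llm.invoke("What is the average of 3, 5, and 10?")
# Instead of prose, the reply carries a structured tool call for our code to execute
print(response.tool_calls)
```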

Jon Gallagher:

That would be huge. I'm thinking about data streaming and the stuff that we've used before, where you have these side functions that can be used to enrich data and such. And as everyone who's ever used any of these tools knows, the generalized approach to data can end up with hallucinations, can end up with bad data and such. So this ability to hook tools in and have the LLM use, maybe, a tool that determines whether this or that is actually true, true in the sense of an answer we're looking for, that would be huge for these tools.

Logan Gallagher:

Yeah, that is an area that I'm wanting to learn more about. That particular feature wasn't necessary for this application; it probably would have been overkill for this app. But I'm definitely wanting to explore it further in future applications.

Jon Gallagher:

As I said, I'm following you, and I've cloned your code and am working with this system. There's some stuff that we are thinking about doing as an approach to products. It's something that I think we've consistently talked about in previous podcasts: that AI is a tool for engineers, in the same way that string processing is a tool. When you first started off in assembly language programming, you had to roll your own string comparison, and now that's stupid; there are libraries to do that. Now we write applications that are going to be more dynamic, that are going to be more scalable, and rather than write the code for this dynamism, this scaling, we're going to use these background capabilities. To say: hey, I want to generate an animation. Well, describe it to me and I'll use the LLM to produce the Python, and meanwhile my application will do more important algorithmic things. Maybe this animation is the result of having done a simulation, a market simulation or a weather simulation. We take that back-end result and say, okay, I will put it in an animated format to make it easier to understand, so I don't have to hire someone to do that, or I don't have to wait until I can provide this data in an accessible format for people. It just becomes another tool set, so we as engineers can focus on the purpose of these programs.

Jon Gallagher:

You know, we are sitting here with major weather events going on in the US: in Southern California, the Santa Ana winds whipping up fires, and major cold fronts moving through the South. If we had the ability to put these scenarios in front of decision makers sooner, maybe the decision makers would be able to react faster. The amorphous "oh, we're going to have 80 mile an hour Santa Ana winds" doesn't land. If we could animate the effects of a Santa Ana wind, maybe we'd get focused sooner on deploying resources. That's just a for instance, and something to throw out into the void for people. I like that a lot.

Jon Gallagher:

You know, it's so often that we have these kinds of generic weather maps with different kinds of color, and we have to tell people, well, the blue means it's really, really cold, but red over here means that we're going to have high winds. Hell, let's just show a palm tree in 25, 50, 80 mile an hour winds and really get people's attention. So I think, properly used, things like the approach that you've got with this, putting the tools together and being able to take the data and insight from the program and put it into a better user experience, this is really where AI is going to help us. Sure, it's fun putting a celebrity's face on an inappropriate animal, but the real value is putting information in front of decision makers in a way that is easily accessible, just like this demo that you're putting together. I'm very, very excited by it. So, last words on this project?

Logan Gallagher:

I think what I really liked about it was, we have talked about being able to call this functionality like a library, and that's exactly what this code does. What really made me excited was that I was able to call these advanced functionalities, like generating an animation, using software that I have attempted to learn multiple times over the years. This was really where, with the assist of generative AI, I was able to learn it, and that is where I do see the utility of this technology.

Jon Gallagher:

Yes, I think when you mentioned that this is software you've attempted to learn, there's still the opportunity to learn something about Blender if you want to make things with it.

Jon Gallagher:

Through this process you've learned a lot more about it, and the creative impulses that you have for Blender are not thwarted by this.

Jon Gallagher:

I mean, sure, you can do some fancy prompts and everything, but at some point in time, if you really want to do an animated movie, now you've got a skill set with Blender that can at least give you a framework, and then you can dive deeper into Blender. Exactly. I'm kind of talking around the point of so many people saying, and rightly so, artists saying: why are you using these tools instead of hiring me? And I wouldn't hire someone right now to do a bouncing ball; I couldn't hire someone for the output of my simulation, just for the latency of doing that. I'm still going to hire people to do the incredible creative work. The scut work, just generating the equivalent of a pie graph with Blender, that's going to kind of be taken away, but hopefully the people who actually understand animation, who can really make this stuff sing, keep working. I'm hoping we aren't taking away the grunt work that pays for everything else. I'm hoping that people use these tools and then appreciate that, okay, the next level up: there are home movies, and then there's Martin Scorsese.

Logan Gallagher:

Yeah, well, to that end: when I was writing this application and having the large language model return Python code back, I might not be an animator, but I can write Python code. And so it was very crucial, while I was writing this app, that I had the capability to debug the Python code that was being returned from the LLM. If I didn't have programming experience, I don't think I would be able to write this application. So having context, having expertise in these areas, is still crucial.

Jon Gallagher:

The human element has got to be there. We're never going to be able to completely automate plumbing, no matter what the context of the plumbing is, whether it's plumbing code or plumbing wastewater.

Logan Gallagher:

Yeah, and when you look at the long prompt that has to be passed to the model in order to get that code returned back in the correct format, it definitely highlights that this is not a click-button solution. I had to include very detailed language in the prompt in order for it to return the expected code in the expected format for it to be usable, and that is a debugging and learning effort AI is never going to be able to do.

Jon Gallagher:

Right. What people forget is AI can only duplicate what's already been done. I know the AGI folks say, well, eventually we'll be able to extrapolate. No. And I sure don't want them to. I don't want my car to extrapolate what this challenging left-hand turn is going to be; just give me the wheel, I can see what's going on. Okay, well, anything else on the horizon? We've still got a lot of work to turn this into something deployable, I mean releasable.

Logan Gallagher:

Right. Well, we've got to put a bow on this so that we can include it in the show notes. But afterwards, I think there are going to be quite a few patterns from this project that we might leverage again in some other exciting ways.

Jon Gallagher:

Yes, absolutely. So stay tuned on that, everybody. Okay, well, thank you all for listening. I'm Jon Gallagher, here with Logan Gallagher, and this has been the Cloud Out Loud podcast. Tune in next time, when we'll try and make it sooner than this previous interval. Thanks, everyone. Bye.

Announcer:

Take care. Suggestions are welcome too: feel free to tweet us at @cloudoutloudpod, or email us at cloudoutloud at ndhswcom. We hope to see you again next week for another episode of Cloud Out Loud.
