Episode 27: Show Notes
Welcome back to Cloud Out Loud as we continue our discussion on generative AI and machine learning. Today is all about exploring the risks of modern machine learning and how we can navigate them as a society. Jon and Logan walk us through the benefits of AI tools for software companies, the dangers of poorly trained generative AI models, why good code may not always remain the standard, and how to assess the cost-effectiveness of the machine learning models at your company. Then, we dive into our concerns about the training data behind large language models, what generative AI could mean for the future of the internet itself, the perils of hallucinated AI data, the “stochastic parrots” critique, security vulnerabilities of generative AI, and so much more! To hear about the importance of transparency in machine learning and to find out what we’ll be talking about next week, press play now.
Key Points From This Episode:
Tweetables:
“Cleaning and curating your data is the least sexy but most important part of getting any value out of any of these [generative AI] tools.” — Logan Gallagher [04:39]
“We may be increasingly reaching the point where the internet is going to be so full of AI-generated content that our subsequent versions of generative AI models will be a snake eating its own tail.” — Logan Gallagher [21:36]
“This is something that I worry about much more than Skynet — that we end up with fragile systems or we end up with unknown attack surfaces because of frameworks that are being generated for us without our ability to have an audit trail of how this came to be.” — Jon Gallagher [32:29]
Links Mentioned in Today’s Episode:
‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?’