Cloud Out Loud Podcast

Episode 27 - Risks of Generative AI

July 06, 2023
Jon and Logan Gallagher

Episode 27: Show Notes

Welcome back to Cloud Out Loud as we continue our discussion on generative AI and machine learning. Today is all about exploring the risks of modern machine learning and how we can properly navigate them as a society. Jon and Logan walk us through the benefits of AI tools for software companies, the dangers of poorly trained generative AI models, why good code may not always remain the standard, and how to assess the cost-effectiveness of the machine learning models at your company. Then, we dive into our concerns about the data going into large language models, what generative AI could mean for the future of the internet itself, the perils of hallucinated AI data, stochastic parrots and other security vulnerabilities of generative AI, and so much more! To hear about the importance of transparency in machine learning and to find out what we’ll be talking about next week, press play now. 


Key Points From This Episode:


  • The risks to consider when implementing AI and/or machine learning in your company.
  • Assessing the best AI tools for software companies and the benefits thereof. 
  • The importance of accurately separating good code from bad code after the initial prompts. 
  • Exploring the dangers of mistraining a generative AI model. 
  • How to know when your AI output is valid and how to monitor the system for updates. 
  • Balancing costs: how cost-effective is your machine learning model for your business?
  • Why we’re concerned about the data that is going into large language models.  
  • How we don’t yet know what machine learning models could mean for the internet’s future.  
  • Our fears surrounding hallucinated AI data and the (possible) universal adoption of bad code. 
  • Some careers that could experience a boom as a result of widespread AI adoption. 
  • Stochastic parrots and the lesser-known (and less-discussed) security vulnerabilities of generative AI.
  • What we need to focus on to make generative AI and machine learning more secure.   
  • Why more transparency is needed around the data that is produced by generative AI tools.
  • Recapping everything we’ve discussed today and what you can look forward to next time. 

Tweetables:

“Cleaning and curating your data is the least sexy but most important part of getting any value out of any of these [generative AI] tools.” — Logan Gallagher [04:39]

“We may be increasingly reaching the point where the internet is going to be so full of AI-generated content that our subsequent versions of generative AI models will be a snake eating its own tail.” — Logan Gallagher [21:36]

“This is something that I worry about much more than Skynet — that we end up with fragile systems or we end up with unknown attack surfaces because of frameworks that are being generated for us without our ability to have an audit trail of how this came to be.” — Jon Gallagher [32:29] 

Links Mentioned in Today’s Episode:

ChatGPT 

GitHub Copilot 

‘Stochastic Parrots: A Novel Look at Large Language Models and Their Limitations’

‘Undetectable backdoors for machine learning models’

Jon Gallagher on LinkedIn
Logan Gallagher on LinkedIn
