Tomorrow’s AI Governance and Cybersecurity — Today

In this episode of How Hard Can It Be?, host Alex Schlager sits down with Ken Huang, a leading authority on Generative AI governance, cybersecurity, and risk management.

Ken is a CSA Fellow, NIST GenAI contributor, co-author of the OWASP Top 10 for LLMs, and an international instructor on GenAI security best practices. He also recently co-authored a book on AI governance and cybersecurity, shaping the global conversation on AI safety, compliance, and red teaming.

We dive into:

  • Why securing LLMs and Agentic AI is uniquely challenging
  • The upcoming CSA paper on Agentic AI Red Teaming
  • How organizations can stress-test AI systems beyond traditional pen testing
  • The essential frameworks and standards for AI governance (NIST, OWASP, CSA)
  • What the global community needs to align on for AI safety and policy

If you’re a security leader, AI builder, or policymaker, this episode will give you a front-row look into the future of AI governance and cybersecurity.

Learn More About Ken Huang’s Work

Explore more of Ken’s insights, research, and publications at https://distributedapps.ai

👉 Subscribe to How Hard Can It Be? for more conversations at the intersection of AI, security, and enterprise strategy.

Get in touch

Become part of our growing community. Reach out with topic suggestions or questions. We’d love to hear from you!
