• Securing GenAI: Secure our Future

  • Sep 10 2024
  • Length: 23 mins
  • Podcast

  • Summary

  • This episode of Trustworthy AI: De-risk Business Adoption of AI is brought to you by Trusted AI. Securing Generative AI and LLMs is a critical topic for organizations adopting AI.

    Host Pamela Gupta, CEO of Trusted AI, talks with Steve Wilson, leader of the OWASP LLM Governance & Cybersecurity project, Product Officer at Exabeam, and author of The Developer's Playbook for LLM Security (O'Reilly).

    Why is Securing GenAI critical?

    Organizations are increasingly prioritizing value creation and demanding tangible results from their Generative AI initiatives. This requires them to scale up their Generative AI deployments, advancing beyond experimentation, pilots, and proofs of concept.

    Adversaries are increasingly harnessing LLMs and Generative AI tools to refine and expedite traditional methods of attacking organizations, individuals, and government systems.

    Organizations also face risks from NOT utilizing the capabilities of LLMs, such as competitive disadvantage, a perception among customers and partners of being outdated, an inability to scale personalized communications, innovation stagnation, operational inefficiencies, a higher risk of human error in processes, and inefficient allocation of human resources. Understanding these different kinds of threats and integrating them with business strategy helps weigh the pros and cons of using Large Language Models (LLMs) against not using them, ensuring they accelerate rather than hinder the business in meeting its objectives.

    The OWASP Top 10 for LLM Applications Cybersecurity and Governance Checklist is for leaders across executive, technology, cybersecurity, privacy, compliance, and legal areas, as well as DevSecOps, MLSecOps, and cybersecurity teams and defenders. It is intended for people striving to stay ahead in the fast-moving AI world, aiming not just to leverage AI for corporate success but also to protect against the risks of hasty or insecure AI implementations. These leaders and teams must create tactics to seize opportunities, combat challenges, and mitigate risks.

    Steve Wilson's new book: https://www.oreilly.com/library/view/the-developers-playbook/9781098162191/

    Can Trustworthy AI help de-risk adoption of AI? Can Trustworthy AI be instrumental in helping organizations gain a competitive edge and promote better business outcomes, including accelerated innovation with AI?
    With extensive experience in global industry leadership across Business Strategy, Technology, and Cybersecurity, Pamela helps clients create a strategic approach to achieving business value with AI by adopting a holistic, risk-based approach to AI Trust. She defined 8 essential pillars of trustworthy AI. Read more details at the Trustedai.ai website.

    Her insights have shaped the way we look at the impact of cyberwarfare on business, strategies for efficient digital transformation, and governance views on algorithmic failures.

    Join Pamela as she delves into her signature framework, AI TIPS, standing for Artificial Intelligence Trust, Integrity, Pillars and Sustainability. This podcast is all about operationalizing governance and building Trustworthy AI systems from the ground up.

    For questions or comments on this podcast, reach out to me.
    To request an AI adoption assessment
