# The Developer's Playbook for Large Language Model Security

Cover Image

## Details

  • Title: The Developer's Playbook for Large Language Model Security
  • Subtitle: Building Secure AI Applications
  • Authors: Steve Wilson
  • Publication Date: 2024
  • Publisher: O'Reilly
  • ISBN-13: 978-1098162207
  • Pages: 200
  • Amazon Rating: 5 stars
  • Goodreads Rating: 3.67 stars

Links: Amazon | Goodreads | Publisher

## Blurb

Large language models (LLMs) are not just shaping the trajectory of AI; they're also ushering in a new era of security challenges. This practical book takes you straight to the heart of these threats. Author Steve Wilson, chief product officer at Exabeam, focuses exclusively on LLMs, eschewing generalized AI security to delve into the unique characteristics and vulnerabilities inherent in these models.

Complete with collective wisdom gained from the creation of the OWASP Top 10 for LLMs list, a feat accomplished by more than 400 industry experts, this guide delivers real-world guidance and practical strategies to help developers and security teams grapple with the realities of LLM applications. Whether you're architecting a new application or adding AI features to an existing one, this book is your go-to resource for mastering the security landscape of the next frontier in AI.

You'll learn:

  • Why LLMs present unique security challenges
  • How to navigate the many risk conditions associated with using LLM technology
  • The threat landscape pertaining to LLMs and the critical trust boundaries that must be maintained
  • How to identify the top risks and vulnerabilities associated with LLMs
  • Methods for deploying defenses to protect against attacks on top vulnerabilities
  • Ways to actively manage critical trust boundaries on your systems to ensure secure execution and risk minimization
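To give a flavor of the trust-boundary management the book discusses, here is a minimal, illustrative sketch of a screen applied to user input before it reaches an LLM. The pattern list and function names are hypothetical (mine, not the author's), and a denylist like this is easily bypassed; real defenses are layered, as the book's chapters on prompt injection and trust boundaries elaborate.

```python
import re

# Illustrative only: naive patterns for a user-input trust boundary.
# These regexes are hypothetical examples, not taken from the book.
SUSPICIOUS_PATTERNS = [
    r"ignore (all|previous|prior) instructions",
    r"system prompt",
    r"reveal .*(secret|key|password)",
]

def screen_user_input(text: str) -> bool:
    """Return True if the input passes this naive screen,
    False if any suspicious pattern is found."""
    lowered = text.lower()
    return not any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)
```

In practice such a filter would be only one layer among several (output validation, least-privilege tool access, human review of high-risk actions), since attackers routinely rephrase injections to evade static patterns.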

## Contents

  1. Chatbots Breaking Bad
  2. The OWASP Top 10 for LLM Applications
  3. Architectures and Trust Boundaries
  4. Prompt Injection
  5. Can Your LLM Know Too Much?
  6. Do Language Models Dream of Electric Sheep?
  7. Trust No One
  8. Don’t Lose Your Wallet
  9. Find the Weakest Link
  10. Learning from Future History
  11. Trust the Process
  12. A Practical Framework for Responsible AI Security