As organizations rapidly adopt LLMs for sensitive tasks, like financial transactions, risk analysis, and healthcare diagnostics, securing these systems becomes paramount. This talk explores the unique challenges of LLM security and provides practical strategies for building trustworthy AI systems.
The widespread adoption of Large Language Models (LLMs) like GPT-4, Claude, and Gemini has introduced unprecedented capabilities and equally unprecedented risks. Organizations are increasingly deploying LLMs to handle sensitive tasks, from processing medical records to analyzing financial documents. This talk examines the evolving landscape of LLM security and privacy, combining theoretical foundations with a walkthrough of example implementations.
Through real-world case studies of attacks and defenses, along with practical implementation guidance using popular security tools, we’ll explore critical vulnerabilities and proven defensive techniques. Special attention will be given to securing fine-tuned and domain-specific LLMs, with live examples using NVIDIA’s NeMo Guardrails, LangChain’s security tools, and Microsoft’s guidance library.
A 10-time award winner in Artificial Intelligence and Open Source and co-author of the book ‘Sculpting Data For ML’, Jigyasa Grover is a powerhouse brimming with passion for making a dent in the world of technology and bridging the gender gap. An AI & Research Lead, she has many years of ML engineering and Data Science experience deploying large-scale, low-latency systems for user personalization and monetization on popular social networking apps like Twitter and Facebook, and in e-commerce at Faire, particularly ads prediction, sponsored content ranking, and recommendation, with a recent focus on Generative AI. She is also one of the few ML Google Developer Experts and Google Women Techmakers Ambassadors globally. As a World Economic Forum Global Shaper, she leverages her technical skills and connections for solution-building, policy-making, and lasting change.