Gen AI Security: An Introduction and Resource Guide

But really it's just turtles

Like many other industries, the security industry has been taken by storm by Artificial Intelligence. Security practitioners are now faced with the challenge of understanding new classifications of threats and new attack techniques. Threat actors are using AI to improve their attacks while also exploiting AI services themselves. AI and Generative AI combine many new technologies into services that improve efficiency and offer fresh solutions to old problems. Naturally, along with this new technology come brand new ways to use and abuse it. In this blog post we share several resources to help you get started with the prerequisites of AI security.

Generative AI (GenAI) leverages various data stores, including vector databases and model services, which introduces new risks at the data and authentication layers. It also exposes a whole host of unique attacks against the generation layer. GenAI has spawned a new generation of attacks that are more creative and interesting than the same old XSS and authentication bypasses we've been discovering in the AppSec world for years.
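To make the generation-layer risk concrete, here is a minimal sketch of a naive retrieval-augmented generation (RAG) flow in Python. The `vector_db` and `llm` objects are hypothetical stand-ins for a real vector database and model service. Because retrieved documents are concatenated straight into the prompt, any attacker-controlled text in the data store can smuggle instructions to the model, an attack known as indirect prompt injection.

```python
# Minimal sketch of a naive RAG pipeline. vector_db and llm are
# hypothetical stand-ins for a real vector store and model client.

def answer(vector_db, llm, user_query: str) -> str:
    # Pull the documents most similar to the query from the data layer.
    docs = vector_db.similarity_search(user_query, k=3)

    # Risk: retrieved text is concatenated directly into the prompt,
    # so a stored document saying "Ignore previous instructions and..."
    # is treated exactly like the developer's own instructions.
    prompt = (
        "Answer the question using only the context below.\n\n"
        "Context:\n" + "\n---\n".join(docs) + "\n\n"
        "Question: " + user_query
    )
    return llm.complete(prompt)
```

The defensive theme, which the resources below return to, is to treat everything coming back from the data layer as untrusted input rather than as part of your instructions.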

To get familiar with this new AI world and tech stack, we have to understand the components within these systems: why and how Large Language Models (LLMs) and other GenAI systems work the way they do. In this post, we'll give you some resources to quickly get up to speed on how they work and on the threats to think about when building and securing them.

First, we have to learn the basics to understand the threats. The following slide deck covers the road to GPT-3, one of OpenAI's most prominent LLMs. It explains how LLMs work at a high level and outlines what you should and shouldn't expect from them.

road-to-chatGPT.pdf

If you're a visual learner, you may like the next link, which walks through the various steps an LLM takes to produce its results; a toy version of that loop in code follows it.

LLM Visualization
A 3D animated visualization of an LLM with a walkthrough.
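If you'd rather see those steps as code than in 3D, here is a toy sketch of the autoregressive loop at the heart of every LLM: the model scores each token in its vocabulary, a softmax turns the scores into probabilities, one token is sampled, and the loop repeats with the new token appended. The four-word vocabulary and the fake scoring function are stand-ins of ours; a real LLM computes its logits with a transformer over billions of parameters.

```python
import math
import random

# Toy stand-in for an LLM; only the sampling loop is realistic.
VOCAB = ["the", "model", "predicts", "tokens"]

def fake_logits(context: list[str]) -> list[float]:
    """Stand-in for a transformer: one raw score (logit) per vocab token."""
    scorer = random.Random(len(context))  # fabricated, deterministic scores
    return [scorer.uniform(-2.0, 2.0) for _ in VOCAB]

def softmax(logits: list[float]) -> list[float]:
    """Turn raw scores into a probability distribution."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def generate(prompt: list[str], n_tokens: int = 5) -> list[str]:
    context = list(prompt)
    for _ in range(n_tokens):
        probs = softmax(fake_logits(context))
        # Sample the next token, append it, and go around again --
        # the autoregressive loop every LLM runs.
        context.append(random.choices(VOCAB, weights=probs, k=1)[0])
    return context

print(" ".join(generate(["the"])))
```

That sampling step is also why the same prompt can yield different completions, which is worth keeping in mind when you reason about what an attacker might eventually coax out of a model.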

Once you're up to speed on the basics of LLMs and the various components under the hood, we can dive into some of the threats. While the following post is older, Microsoft did some excellent work in it on threat modeling AI systems.

Threat Modeling AI/ML Systems and Dependencies - Security documentation
Threat Mitigation/Security Feature Technical Guidance

In 2022, Microsoft released another post covering more specific attacks. It makes a good checklist to go through when threat modeling GenAI systems; a code sketch of that checklist follows the link.

Failure Modes in Machine Learning - Security documentation
Machine Learning Threat Taxonomy
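As one way to put that checklist to work, here is a small Python structure enumerating a handful of the intentional failure modes from Microsoft's taxonomy. The selection and the review questions are our own paraphrase, not an official list, so extend it with the remaining failure modes for a full review.

```python
from dataclasses import dataclass

@dataclass
class ThreatCheck:
    failure_mode: str  # category from Microsoft's ML failure-mode taxonomy
    question: str      # what to ask during a GenAI threat model review

# A few intentional failure modes from the taxonomy, paraphrased.
CHECKLIST = [
    ThreatCheck("Perturbation attack",
                "Can crafted inputs flip the model's output?"),
    ThreatCheck("Poisoning attack",
                "Is any training or fine-tuning data attacker-influenced?"),
    ThreatCheck("Model inversion",
                "Could responses reconstruct sensitive training data?"),
    ThreatCheck("Membership inference",
                "Can an attacker tell if a record was in the training set?"),
    ThreatCheck("Model stealing",
                "Can the model be cloned through repeated queries?"),
]

for check in CHECKLIST:
    print(f"[ ] {check.failure_mode}: {check.question}")
```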

Another in-depth article on the threats to AI systems comes from Rahul Zhade, who helped create the OWASP LLM Top Ten. It covers many of the attacks against GenAI and the underlying models; a toy example of one such attack follows the link.

Introduction to Adversarial AI
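To give a flavor of the attacks that article covers, here is a minimal sketch of an evasion attack in the spirit of the fast gradient sign method (FGSM), applied to a toy logistic-regression "victim" so the gradient can be written by hand. Real attacks target deep networks and use automatic differentiation, but the core idea is the same: nudge each input feature in the direction that most increases the model's loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy victim: a logistic-regression classifier with fixed weights.
w = rng.normal(size=8)
b = 0.1

def predict(x: np.ndarray) -> float:
    """Probability that x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm(x: np.ndarray, y: int, eps: float = 0.3) -> np.ndarray:
    """Evasion attack: step each feature along the sign of the loss gradient.

    For logistic regression with cross-entropy loss, the gradient of the
    loss with respect to the input is (p - y) * w, so no autodiff is needed.
    """
    grad = (predict(x) - y) * w
    return x + eps * np.sign(grad)

x = rng.normal(size=8)      # a benign input
y = int(predict(x) > 0.5)   # treat the model's own label as ground truth

x_adv = fgsm(x, y)
print(f"clean score: {predict(x):.3f} -> adversarial score: {predict(x_adv):.3f}")
```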

For more content on attacking GenAI and models, here is a great reading list to keep you going!

adversarial-ai-reading-list/README.md at main · rzhade3/adversarial-ai-reading-list
Reading list of more resources to learn about Adversarial Attacks on AI Systems - rzhade3/adversarial-ai-reading-list

We hope you learned more about AI and AI security; it's an exciting new world to work in. Like we say in security, there's always something new to learn! Stay tuned for more GenAI and security content in the coming months, and subscribe to stay informed!