Like many industries, Artificial Intelligence has taken the security industry by storm. Security practitioners now are faced with the challenge of understanding new classifications of threats and new techniques of attack. Threat Actors are utilizing AI to improve their attacks, while also exploiting AI services. AI and Generative AI utilize many types of new technologies to build services that are used to improve efficiency and offer new solutions to problems of the past. Of course, along with this new technology are brand new ways to use and abuse them. In this blog post we will be sharing several resources to help get you started to understand the prerequisites to AI Security.
Generative AI (GenAI) leverages various data stores, including vector databases and model services, which presents new risks at the data and authentication layer. It also exposes a whole host of unique attacks against the generation layer. GenAI has spawned a new generation of attacks that are more creative and interesting than the same old XSS and authentication bypass we've been discovering in the AppSec world for years.
To get familiar with this new AI world and tech stack, we have to understand the components within the systems to understand why and how Large Language Models (LLMs) and other GenAI systems work the way they do. In this post, we’ll give you some resources to quickly get you up to speed on how they work on the threats to think about when building and securing them.
First, we have to learn the basics to understand the threats. The following is a slide deck about the road to GPT-3, one of the most prominent LLMs, created by OpenAI . This slideshow covers how LLMs work at a high level and outlines what you should and shouldn't expect from them.
If you're a visual person, you may like the next link, as it shows how LLMs work in a visual way. It shows the various steps LLMs take to create results.
Once you're up to speed on the basics of LLMs and the various components under the hood, we can dive into some of the threats. While the following post is older, Microsoft did some excellent work here working through threat modeling AI systems.
In 2022, Microsoft did another post about more specific attacks that was released. This is a good checklist to go through when threat modeling GenAI systems.
Another in-depth article on the threats to AI systems is from Rahul Zhade, who helped create the OWASP LLM Top Ten. This article covers a lot of the attacks against GenAI and models.
For additional great content on attacking GenAI and models, here is a great reading list for more!
We hoped you learned more about AI and AI security, it’s an exciting new world to work in. Like we say in security, there's always something new to learn! Stay tuned for more GenAI and security content in the coming months—subscribe to stay informed!