What is AI Security?

Post by Madeleine Xia
Articles & Editorial
As artificial intelligence (AI) plays a growing role in an increasingly digital society, the demands for security and privacy in these rapidly emerging systems keep rising.

While AI can automate, streamline, and optimize many processes and systems, it also brings new and complex security challenges. To protect sensitive data, maintain trust, and ensure the stability of the societal functions that depend on AI, it is crucial to secure these systems against learning incorrect things, making mistakes, and leaking information.

In this newsletter, we will explore why AI security is important, what it entails, and what can be done to ensure that AI systems protect and strengthen us rather than expose us to risks.

Why is AI Security Important?

Just as AI creates significant opportunities, it can also introduce substantial new security risks. It is therefore vital to secure AI systems throughout their lifecycle. As Dr. José-Marie Griffiths, President of Dakota State University (DSU), states, security measures must be integrated from the beginning, so that security becomes a central part of the AI system rather than a patch after the fact. Likewise, General Timothy Haugh, Director of the NSA, points out that implementing security measures up front is critical to handling threats that could jeopardize both individual privacy and national security.

AI security is the practice of protecting AI systems. In concrete terms, this means safeguarding an AI system to prevent it from:

1. Learning Incorrect Things

AI systems risk learning incorrect or harmful patterns if they are not carefully monitored and controlled. This can happen through the use of faulty or biased data, or because malicious actors manipulate the system. Two clear examples are "Data Poisoning", where corrupted data is deliberately fed into the training process, and "Input Manipulation", where input data is crafted to mislead the AI.
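To make data poisoning concrete, here is a toy sketch (not from the article, and far simpler than a real attack): an attacker injects mislabeled points into the training set of a simple nearest-centroid classifier, shifting its decision boundary. All data and numbers are made up for illustration.

```python
import random

random.seed(0)

# Synthetic 1-D training data: class 0 clusters near 0.0, class 1 near 10.0.
clean = [(random.gauss(0.0, 1.0), 0) for _ in range(50)] + \
        [(random.gauss(10.0, 1.0), 1) for _ in range(50)]

def centroid_classifier(data):
    """Train a nearest-centroid classifier: predict the class whose mean is closer."""
    c0 = [x for x, y in data if y == 0]
    c1 = [x for x, y in data if y == 1]
    m0, m1 = sum(c0) / len(c0), sum(c1) / len(c1)
    return lambda x: 0 if abs(x - m0) < abs(x - m1) else 1

clean_model = centroid_classifier(clean)

# Data poisoning: the attacker injects points drawn near class 1's cluster
# but deliberately labeled as class 0, dragging class 0's centroid upward.
poison = [(random.gauss(10.0, 1.0), 0) for _ in range(60)]
poisoned_model = centroid_classifier(clean + poison)

print(clean_model(7.0))     # 1 -- the clean model classifies 7.0 correctly
print(poisoned_model(7.0))  # 0 -- the poisoned model now misclassifies it
```

The same idea scales up: in real systems the poisoned samples are hidden in large scraped datasets, which is why provenance checks and outlier filtering on training data matter.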

2. Making Mistakes

Sometimes, AI systems make decisions or act in ways that were never intended or desired. It is therefore important to carefully define and limit what an AI system may decide and influence. One example is "Generative AI Hallucinations": a generative AI system, such as a text- or image-generating model, produces output that is inaccurate, irrelevant, or does not correspond to the real or desired information.
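One simple mitigation pattern is to validate generated output against a trusted source before using it. The naive sketch below (our own illustration, not a method from the article) flags answer sentences that never appear verbatim in the source text; real grounding checks are far more sophisticated, but the principle is the same.

```python
def flag_ungrounded(answer, source_text):
    """Return answer sentences that do not appear in the trusted source (naive check)."""
    ungrounded = []
    for sentence in answer.split(". "):
        s = sentence.strip(". ").lower()
        if s and s not in source_text.lower():
            ungrounded.append(sentence)
    return ungrounded

source = "The Eiffel Tower is 330 metres tall. It was completed in 1889."
answer = "The Eiffel Tower is 330 metres tall. It was completed in 1901"

print(flag_ungrounded(answer, source))  # the fabricated completion date is flagged
```

A check like this cannot prove an answer is correct, but it can catch claims the system invented out of thin air, which is exactly the hallucination failure mode described above.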

3. Leaking Information

AI systems are often trained on, or handle, sensitive information. This requires strict security measures to protect the data from falling into the wrong hands. It is crucial to ensure that such systems do not, accidentally or through manipulation, leak sensitive information or expose data in ways that could harm individuals or organizations. The security risks include threats to privacy and intellectual property, theft of AI models, exfiltration of training data, and re-identification of anonymized data.
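Re-identification of anonymized data can be illustrated with a toy linkage attack (our own made-up example; all names and values are fictional): an attacker joins "anonymized" records with public data on shared quasi-identifiers such as postal code and birth year.

```python
# "Anonymized" training records: names removed, but quasi-identifiers kept.
anonymized = [
    {"zip": "11122", "birth_year": 1984, "diagnosis": "asthma"},
    {"zip": "11122", "birth_year": 1991, "diagnosis": "diabetes"},
    {"zip": "33455", "birth_year": 1984, "diagnosis": "hypertension"},
]

# Public data the attacker already has (e.g. a voter roll or social profile).
public = [
    {"name": "A. Andersson", "zip": "11122", "birth_year": 1991},
    {"name": "B. Berg", "zip": "33455", "birth_year": 1984},
]

def reidentify(anonymized, public):
    """Link records on the quasi-identifiers (zip, birth_year)."""
    matches = []
    for pub in public:
        hits = [r for r in anonymized
                if r["zip"] == pub["zip"] and r["birth_year"] == pub["birth_year"]]
        if len(hits) == 1:  # a unique match re-identifies the person
            matches.append((pub["name"], hits[0]["diagnosis"]))
    return matches

print(reidentify(anonymized, public))
# [('A. Andersson', 'diabetes'), ('B. Berg', 'hypertension')]
```

Removing names alone is therefore not enough; defenses such as aggregation, generalization of quasi-identifiers, or differential privacy are needed to break this kind of linkage.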
 

To improve and protect our systems, it is important to understand how our AI is created and functions, as well as its capabilities and limitations. In upcoming newsletters, we will continue to explore the concept of AI security. We will also discuss various methods to prevent AI from learning harmful patterns, making incorrect decisions, or inadvertently leaking information.
 

Contact us if you are a partner of AI Sweden and want to learn more or engage with AI Sweden on AI security.

Sign up here to receive updates from the AI Security Newsletter!

Related material

“Suite of Tools for the Analysis of Risk (STAR) Fact Sheet”: 

https://www.cisa.gov/resources-tools/resources/suite-tools-analysis-risk-star-fact-sheet 

CISA's publications:

https://www.cisa.gov/resources-tools/resources?f%5B0%5D=resource_type%3A43

July 8, 2024 by Madeleine Xia
