NIST Report Identifies Cyberattacks That Exploit AI Systems

U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) releases report on the vulnerabilities of AI systems to cyberattacks.

As artificial intelligence (AI) becomes increasingly integrated into our daily lives, the potential for cyberattacks targeting AI systems has become a growing concern. The U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) recently released “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations,” a report in its Trustworthy and Responsible AI series that identifies the types of cyberattacks capable of manipulating AI systems and outlines mitigation strategies. The report sheds light on the vulnerabilities of AI systems and the need for stronger defenses against potential attacks.

The Threats of Adversarial Machine Learning

The NIST report sorts potential adversarial machine learning attackers into three categories based on how much they know about a target system: white-box, black-box, and gray-box. White-box attackers have full knowledge of an AI system, black-box attackers have minimal access and can only query it from the outside, and gray-box attackers know something about the model but lack access to its training data. The report emphasizes that attackers in all three categories can cause significant damage to AI systems.

AI System Poisoning and Abuse Attacks

One of the main threats identified in the NIST report is poisoning, in which corrupted data is introduced into an AI system during its training phase. For example, a bad actor could slip instances of inappropriate language into conversation records so that a chatbot learns to treat them as common enough parlance to use in customer interactions. Another type of attack is abuse, in which incorrect information is inserted into a legitimate source, such as a webpage, that the AI system later absorbs. Both aim to steer the system away from its intended use by feeding it false information. A toy illustration of the poisoning mechanism follows below.
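
To make the poisoning scenario concrete, here is a minimal, self-contained sketch of the idea. It is not taken from the NIST report: the synthetic dataset, the logistic regression model, and the 20% label-flip rate are all illustrative assumptions chosen only to show how corrupted training data degrades a model.

```python
# Illustrative only: a toy label-flipping poisoning attack.
# All choices (synthetic data, logistic regression, 20% flip rate)
# are assumptions for demonstration, not details from the NIST report.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean data.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poison the training set: flip the labels of 20% of the examples,
# mimicking corrupted records slipped into the training pipeline.
n_poison = int(0.2 * len(y_train))
idx = rng.choice(len(y_train), size=n_poison, replace=False)
y_poisoned = y_train.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean test accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned test accuracy:", poisoned_model.score(X_test, y_test))
```

Real poisoning attacks are usually subtler than random label flipping, targeting specific behaviors rather than overall accuracy, but the mechanism is the same: the model faithfully learns whatever its training data contains.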

Privacy Attacks and Evasion Attacks

Privacy attacks target sensitive information about the AI system or the data it was trained on; a bad actor can ask a chatbot seemingly legitimate questions and use the answers to reverse engineer the model and find its weak spots. Evasion attacks occur after an AI system has been deployed and aim to change how it responds to ordinary inputs: adding markings to stop signs so an autonomous vehicle misreads them as speed limit signs, for instance, or creating confusing lane markings that make the vehicle veer off the road. The sketch below shows the evasion idea in miniature.
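
The stop-sign example has a simple analogue in code. The following sketch is an assumption-laden toy, not material from the report: it applies a fast-gradient-sign style perturbation to the inputs of a linear classifier. For logistic regression, the gradient of the loss with respect to an input is proportional to the weight vector, so nudging each feature slightly uphill on the loss is enough to flip many predictions.

```python
# Illustrative only: an FGSM-style evasion attack on a linear model.
# The data, model, and epsilon are assumptions for demonstration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# For logistic regression, the gradient of the cross-entropy loss with
# respect to the input x is (p - y) * w, where p = sigmoid(w.x + b).
w = model.coef_[0]
p = model.predict_proba(X_test)[:, 1]
grad = (p - y_test)[:, None] * w[None, :]

# Fast-gradient-sign step: move every feature slightly uphill on the loss.
eps = 0.3
X_adv = X_test + eps * np.sign(grad)

print("accuracy on clean inputs:    ", model.score(X_test, y_test))
print("accuracy on perturbed inputs:", model.score(X_adv, y_test))
```

The perturbation is small relative to each feature's scale, the same way altered stop signs still look like stop signs to a human, yet the model's accuracy drops sharply.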

The Challenge of AI Defense

One of the main challenges in defending AI systems is that once a model has been taught a behavior, it is very difficult to make it unlearn that behavior, even when the behavior is malicious or damaging. The NIST report highlights the need for better defenses and notes that existing mitigations lack robust assurances against cyberattacks on AI systems. While no foolproof method for protecting AI exists yet, following basic cyber hygiene practices can help reduce the potential for abuse.
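
Cyber hygiene for AI can start as simply as screening data before it ever reaches the model. The sketch below is one hypothetical example of such a step, not a defense endorsed by the NIST report: it uses an off-the-shelf anomaly detector to drop suspicious records from a training set before fitting.

```python
# Illustrative only: screening training data with an anomaly detector
# before fitting. A hypothetical hygiene step, not a NIST recommendation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=2)

# Simulate a handful of wildly out-of-distribution poisoned records.
rng = np.random.default_rng(2)
X_bad = rng.normal(loc=10.0, scale=1.0, size=(50, X.shape[1]))
y_bad = rng.integers(0, 2, size=50)
X_all = np.vstack([X, X_bad])
y_all = np.concatenate([y, y_bad])

# Flag and drop the records the detector considers anomalous (-1 label).
detector = IsolationForest(contamination=0.03, random_state=2)
keep = detector.fit_predict(X_all) == 1
print(f"dropped {np.sum(~keep)} suspicious records")

model = LogisticRegression(max_iter=1000).fit(X_all[keep], y_all[keep])
```

A filter like this only catches poison that looks statistically abnormal; carefully crafted attacks that blend in with clean data can slip past it, which is exactly why the report calls for stronger defenses.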

Conclusion

The NIST report serves as a wake-up call about the vulnerabilities of AI systems and the potential for cyberattacks to exploit them. As AI permeates more of the connected economy, the risks posed by attacks on AI systems will only grow. The report urges the community to develop better defenses, and while no perfect solution exists today, adhering to basic cyber hygiene practices can go a long way toward mitigating the risks and keeping the use of AI technology responsible and trustworthy.
