
How NIST is changing standards to safeguard AI

Sue Poremba, Contributing Guest Author

At the RSA Conference 2024, more than one cybersecurity expert called AI the most important innovation since the internet or referred to artificial intelligence and machine learning as the next great disrupter. 


Let’s be up front about AI: it’s been around for years, and it has been a very useful tool in cybersecurity. Rather than taking jobs from qualified cybersecurity analysts, AI handles repetitive and mundane tasks, like scanning logs for anomalies and cutting down on the number of false positive alerts. This allows the human professionals to tackle the more difficult jobs that in the past were pushed aside for those otherwise necessary but time-consuming tasks. Until November 2022, it’s a safe bet that most people didn’t think twice about AI.


November 2022 saw the launch of ChatGPT and consumer access to generative AI, changing everything. Now AI is directly in the hands of anyone with access to a computer or smartphone, and the average person is seeing the power of artificial intelligence first-hand. 


Generative AI can do a lot of good. At the same time, there is continuing evidence of it being used in subversive ways: powering disinformation and misinformation campaigns, helping threat actors craft more realistic phishing attacks, speeding up malware development, and encouraging the sharing of sensitive corporate and personal information, which leads to data leaks.


In October 2023, the White House released an Executive Order to address responsible AI use. The EO also called for the National Institute of Standards and Technology (NIST) to develop “guidelines, standards, and best practices for AI safety and security.” This includes “developing a companion resource to the AI Risk Management Framework, NIST AI 100-1, for generative AI . . . [and] developing a companion resource to the Secure Software Development Framework to incorporate secure development practices for generative AI and for dual-use foundation models.”


Why we need NIST standards for AI


According to research from ISACA, most organizations don't have a plan in place for handling AI, especially generative AI. Even though 70% of respondents say staff are using AI, and 60% say employees are using generative AI (e.g., Microsoft Copilot, Google Gemini, and OpenAI's ChatGPT), only 15% say they have any type of AI policy in place, and 40% say they don't offer any type of training around AI.


And when training is offered, most of it is centered on the tech teams, Rob Clyde, ISACA evangelist and past chair, said in a conversation at RSAC 2024. There are no ethical policies around AI, either, Clyde added, so employees and stakeholders don’t understand how their behaviors with AI could have a negative impact on the company.


A big problem, said Clyde, is Shadow AI: people using genAI tools at work without permission from IT or security teams. Employees turn to genAI because it helps with productivity, but it can also be a landmine. In one instance, a member of the board of directors of a well-known company (Clyde wouldn't reveal which one) decided to make corporate information easier for other board members to find through genAI tools. An entire handbook of sensitive corporate information, including financial figures and proprietary product details, was fed into the AI tool. That information is now available to anyone who asks for it, and the organization is paying the consequences of the data leakage.


Regardless of users' technical aptitude or how the tools are used, the explosion in popularity of generative AI tooling has created a need for guidelines on developing and using this new type of software safely.


The new NIST AI standards


To mitigate the threats listed above, NIST has published four new standards. Together they address risk management, global engagement on AI standards, the risks of synthetic content, and secure software development guidelines specifically for AI model development.


NIST AI 600-1

Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile — spells out the risks associated with genAI and offers a rigorous set of action plans for governance, third-party risk management, testing, and AI red-teaming.


NIST AI 100-5

A Plan for Global Engagement on AI Standards takes a worldwide view of AI development and implementation. Drawing on the priorities in the NIST-developed Plan for Federal Engagement in AI Standards and Related Tools, it builds standards around information sharing and makes recommendations on the use of AI on a global scale.


NIST AI 100-4

Reducing Risks Posed by Synthetic Content focuses on digital content transparency and on how to recognize and address threats caused by AI-generated misinformation, disinformation, and deepfakes. This NIST report offers a very detailed look at the many ways AI can be used nefariously and the dangers involved, and it provides suggestions for verifying whether content is authentic or AI-generated.


NIST SP 800-218A

Secure Software Development Practices for Generative AI and Dual-Use Foundation Models “adds practices, tasks, recommendations, considerations, notes, and informative references that are specific to AI model development throughout the software development life cycle.”
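To make that concrete, here is a minimal sketch of one practice in that spirit: verifying the checksum of a downloaded model artifact before it is used in a build, so an unexpected or tampered file fails the pipeline instead of being loaded silently. The file path and expected digest below are hypothetical placeholders, and the example is illustrative rather than a practice quoted from SP 800-218A.

import hashlib
import sys
from pathlib import Path

# Hypothetical placeholder: in practice the expected digest would come from a
# trusted, signed source such as your model registry or release notes.
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def sha256_of(path: Path) -> str:
    # Stream the file in chunks so large model weights need not fit in memory.
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def main() -> None:
    model_path = Path(sys.argv[1])  # e.g., path to a downloaded weights file
    actual = sha256_of(model_path)
    if actual != EXPECTED_SHA256:
        # Fail the build rather than silently loading an unverified model.
        raise SystemExit(f"Model digest mismatch: {actual}")
    print(f"Verified {model_path}")

if __name__ == "__main__":
    main()

A check like this is easy to wire into a CI step, which is where most teams would want an unverified model to be caught.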


Conclusion


All of these new NIST standards are the result of Executive Order 14110. They were pulled together quickly, a clear acknowledgement of the dangers of and rising concern around generative AI. Combined with the guidance spelled out in the Cybersecurity Framework 2.0, they can help organizations more effectively mitigate the risks posed by Shadow AI and threat actors.

If you’d like to learn more about how Chainguard AI Images can help your projects and organization with compliance goals, please visit our AI resource page or feel free to contact us directly.

