Chainguard joins Coalition for Secure AI with OpenAI, Google, Anthropic
As generative AI revolutionizes software development, organizations face the challenge of integrating these powerful tools without compromising security. Today’s rapid adoption of generative AI mirrors the early days of open-source software (OSS). Everyone is racing to harness this new technology to scale their work and boost efficiency, but many overlook the security implications. AI remains untested in many ways and hasn't yet undergone the rigorous security review that production deployment demands.
That’s why Chainguard is proud to join the Coalition for Secure AI (CoSAI) as a founding member. Hosted by the OASIS global standards body, CoSAI is an open-source initiative that provides practitioners and developers with the guidance and tools needed to create Secure-by-Design AI systems.
CoSAI includes founding members Amazon, Anthropic, Chainguard, Cisco, Cohere, GenLab, Google, IBM, Intel, Microsoft, NVIDIA, OpenAI, PayPal, and Wiz. The coalition will foster a collaborative ecosystem to share open-source methodologies, standardized frameworks, and tools for securing AI.
Establishing trust and security in AI
CoSAI seeks to offer clear best practices and standardized approaches to help developers mitigate potential vulnerabilities and create secure AI systems. At Chainguard, we already live and breathe this mission. We look forward to partnering with other industry leaders to drive the ecosystem forward at this critical moment.
To kick-start its efforts, CoSAI has formed three initial work streams:
Software supply chain security for AI systems: Enhancing composition and provenance tracking to secure AI applications, an area where Chainguard's expertise is particularly relevant. Our founders, Dan Lorenc and Kim Lewandowski, developed the original Supply-chain Levels for Software Artifacts (SLSA) framework, and we look forward to expanding that framework to include AI models.
Bridging the cybersecurity gap: Addressing security investments and the challenges of integrating AI with classical systems.
AI security governance: Developing best practices and risk assessment frameworks for AI security.
Securing the software supply chain for AI
"As we witness AI workloads evolving beyond simple applications to more sensitive and critical functions, ensuring their security becomes paramount,” explains Kim Lewandowski, co-founder and Chief Product Officer at Chainguard. “The current landscape is fragmented, with developers navigating through inconsistent and siloed guidelines. At Chainguard, we are excited to join CoSAI and contribute our expertise in creating secure-by-design AI systems.”
Chainguard's extensive experience in software supply chain security, including the instrumental roles of Dan Lorenc and Kim Lewandowski in creating SLSA at Google, makes us a valuable contributor to CoSAI's mission. Chainguard will play a crucial role in extending SLSA provenance to AI models, enabling a deeper understanding of how AI systems are created and handled throughout the software supply chain.
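To make the idea concrete, here is a minimal sketch of what SLSA-style provenance for a trained model might look like. The statement layout follows the published in-toto Statement and SLSA v1 provenance schemas, but the model-specific details (the `buildType` URI, the training-data dependency entry, and all example URLs) are hypothetical illustrations, not part of any current standard:

```python
import hashlib


def model_provenance(model_path: str, builder_id: str, training_repo: str) -> dict:
    """Assemble a SLSA v1-style provenance statement for a model artifact.

    The outer structure is an in-toto Statement; the predicate follows the
    SLSA v1 provenance schema. Field values below are illustrative.
    """
    # The subject pins the exact model file by cryptographic digest.
    with open(model_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()

    return {
        "_type": "https://in-toto.io/Statement/v1",
        "subject": [{"name": model_path, "digest": {"sha256": digest}}],
        "predicateType": "https://slsa.dev/provenance/v1",
        "predicate": {
            "buildDefinition": {
                # Hypothetical buildType identifying an ML training pipeline.
                "buildType": "https://example.com/ml-training/v1",
                "externalParameters": {"trainingRepo": training_repo},
                "resolvedDependencies": [
                    # Hypothetical: pinning the training dataset by digest so
                    # consumers can verify exactly what the model was trained on.
                    {
                        "uri": "oci://registry.example/datasets/corpus",
                        "digest": {"sha256": "0" * 64},  # placeholder digest
                    }
                ],
            },
            "runDetails": {"builder": {"id": builder_id}},
        },
    }
```

In practice such a statement would be signed (for example with Sigstore) and attached to the model artifact, so that anyone pulling the model can verify who built it and from what inputs.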
Chainguard will also hold a seat on CoSAI's Project Governing Board, a technical body composed of leaders from each of the founding organizations. This direct involvement will allow us to actively shape the development of AI security standards and best practices.
Developers need a framework for AI security that meets the moment and responsibly captures the opportunity. Together, we can set new benchmarks for AI security, ensuring that innovation progresses on a foundation of safety and reliability. To learn how you can support CoSAI, visit coalitionforsecureai.org.
Want to get ahead of the curve in developing securely with AI? Sign up for the waitlist for Chainguard's upcoming AI security courses and be among the first to gain access to the latest knowledge and best practices: https://get.chainguard.dev/ai-course-waitlist.
Ready to Lock Down Your Supply Chain?
Talk to our customer-obsessed, community-driven team.