
What cybersecurity professionals are saying about AI

Sue Poremba, Contributing Guest Author

There are a lot of issues that keep CISOs up at night, both figuratively and literally: ransomware attacks, software supply chain attacks that can take a company offline for days, the rogue insider downloading sensitive corporate information at 2 a.m.


But the biggest issue keeping CISOs up at night right now is AI, particularly generative AI. It’s not just the types of attacks that can be launched with gen AI, the speed at which malware can be created and spread, or even the higher-quality social media phishing it makes possible. It is the inability to control and counter the security problems that AI creates as quickly as they arise.


“If we don’t figure out how to do things right, CISOs will lose their authority,” said Jay Mar-Tang, Field CISO at Pentera, during a brunch meeting with reporters at RSAC 2024.


AI is the new rogue IT


According to research from ISC2, among respondents who said their company had seen an uptick in threats and attacks in the previous six months, 13% believe the increase is directly linked to AI, while 41% said they can’t tell whether the attacks have any direct connection to AI.


And that’s the problem, according to the study. “Arguably, if an AI LLM is working well, you won’t know the difference between automated and human-based attacks. The clues will be more nuanced, such as speed and repetition of attack that appear implausibly fast for a human (or a room full of humans) to conduct,” the report stated.


Not only is it getting harder to identify an AI-generated attack, but AI-related risk may increasingly come from within. Just as Shadow IT frustrated security and IT teams a decade ago, Shadow AI is becoming more prevalent across organizations.


It’s becoming more common for employees to use AI without permission from leadership, said Rob Clyde, ISACA evangelist and past chair. Gen AI shows up on work devices because workers say it boosts their productivity, an echo of what they once said about Shadow IT. But without oversight of how employees use gen AI and on which devices, CISOs can’t monitor the risk.


AI, third parties, and the software supply chain


As AI becomes more embedded in products, software, and business operations, CISOs have to figure out how to recognize and address the threats that come with it. “It’s not easy,” said Cathy Polinsky, CTO of DataGrail, during the RSA brunch roundtable, “because third-party products have embedded AI into their products. All it takes is one product or process outside of policies set up between the organization and vendor to create chaos.”


For example, Company A may have a strict set of policies and processes for third-party risk, including evaluating the technologies used by outside contractors. But one of its vendors, Company B, decides to ship a gen AI chatbot in one of its products and does nothing to address the risks around it: no updates to its privacy policy, and no way for Company A to turn off the feature or apply updates or patches.


“The average customer has 1,500 systems that they're working with,” said Polinsky. Keeping track of which of those systems use some form of AI, what data that AI collects, and how it is used is becoming an insurmountable task for many CISOs.


AI needs governance and data classification


When a new technology becomes available, IT teams and users want to get it up and running quickly and learn how it can best benefit business operations. That’s what happened with gen AI: users began experimenting with it as soon as they could, without any consideration of the risks or of what might happen to the data entered into the system.


When new technologies are onboarded, they need governance, but with AI there is a huge gap, according to James Christiansen, VP and CSO at Netskope. And as the mainstream introduction of generative AI reaches its second anniversary at the end of 2024, CISOs and legal teams should have a clearer picture of how to make AI-generated data meet compliance requirements.


Data adds another wrinkle: you can’t protect what you don’t know you have. To keep data from being misused or leaked through AI, organizations should take a closer look at where their data lives and whether they need to restructure how it is classified. If you don’t know where your data is, you can’t manage it. And if you can’t manage your data, you can’t govern it.
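That chain of reasoning, know where the data is, classify it, then enforce policy on it, can be made concrete with a small sketch. The Python snippet below is a minimal, hypothetical illustration: the labels, regex rules, and the `is_allowed_for_genai` gate are assumptions for the example, not a recommendation from the article or any particular product, and a real deployment would use the organization's own classification scheme and discovery tooling.

```python
import re

# Hypothetical sensitivity labels and regex rules -- a real program would use
# the organization's own data classification scheme, not these toy patterns.
PATTERNS = {
    "restricted": [
        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),       # looks like a US SSN
        re.compile(r"\b(?:\d[ -]?){13,16}\b"),      # looks like a payment card number
    ],
    "confidential": [
        re.compile(r"(?i)\b(salary|acquisition|internal only)\b"),
    ],
}

def classify(text: str) -> str:
    """Return the most sensitive matching label, or 'public' if nothing matches."""
    for label in ("restricted", "confidential"):
        if any(pattern.search(text) for pattern in PATTERNS[label]):
            return label
    return "public"

def is_allowed_for_genai(text: str) -> bool:
    """Governance gate: only 'public' data may be sent to an external gen AI tool."""
    return classify(text) == "public"

if __name__ == "__main__":
    prompt = "Summarize this note: employee 123-45-6789 asked about a salary change."
    if is_allowed_for_genai(prompt):
        print("OK to send to the external model")
    else:
        print(f"Blocked: prompt classified as '{classify(prompt)}'")
```

Even a gate this simple only works if the organization already knows which systems the prompts flow through, which is exactly the visibility gap the governance discussion above describes.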


There is still a steep learning curve for CISOs when it comes to AI. The ISC2 study found that only four in 10 security leaders have experience with AI, ML, and LLMs, and only a quarter of respondents felt equipped to deal with AI’s risks. That honeymoon phase has to end soon, because threat actors are only getting more efficient and proficient with the technology. Without educating themselves on how best to approach AI’s risks, CISOs will continue to have a lot of sleepless nights.


Moving secure AI forward


Are you developing and deploying AI models with confidence? The risks associated with third-party AI components and the software supply chain require a proactive approach to security. Chainguard’s latest course — Securing the AI/ML Supply Chain — can help you understand the fundamentals of software supply chain security for AI/ML systems.


Chainguard AI Images provide a hardened, minimal, and secure-by-design foundation for your AI initiatives. Ensure your AI deployments are protected from the ground up — explore Chainguard AI Images today.

