July 3, 2024
Exclusive: What will it take to secure gen AI? IBM has a few ideas


As organizations increasingly look to benefit from the power of generative AI, security is a growing challenge.

Today technology giant IBM is taking aim at gen AI risks with the introduction of a new security framework aimed at helping customers address the novel risks posed by gen AI. The IBM Framework for Securing Generative AI focuses on protecting gen AI workflows across the full lifecycle, from data collection through production deployment. The framework provides guidance on the most likely security threats organizations will face when working with gen AI, as well as recommendations on the top defensive approaches to implement. IBM has been growing its gen AI capabilities over the past year with its watsonX portfolio which includes models and governance capabilities.

“We took our expertise and distilled it down to detail the most likely attacks along with the top defensive approaches that we think are the most important for organizations to focus on and to implement in order to secure their generative AI initiatives,” Ryan Dougherty, program director, emerging security technology at IBM Security, told VentureBeat.

What’s different about gen AI security? 

IBM has no shortage of experience and technology assets in the security space. The risks that face gen AI workloads in some respects are similar to any other type of workload and in other respects, they are also new and unique.

The three core tenets of the IBM approach are to secure the data, the model and then the usage. Underlying those three tenants is an overarching need to ensure that throughout the process there is secure infrastructure and AI governance in place.

Image credit: IBM

Sridhar Muppidi, IBM Fellow and CTO at IBM Security explained to VentureBeat that core data security practices such as access control and infrastructure security remain critical in gen AI, just as they are in all other forms of IT usage. 

That said, other risks are somewhat unique to gen AI like data poisoning where false data is added to a data set that can lead to inaccurate results. Bias and data diversity are another set of particular risks in gen AI data that need to be addressed. Muppidi noted that data drift and data privacy are also risks that have particular gen AI attributes that need to be secured.

Muppidi also identified prompt injection, where a user attempts to maliciously modify the output of a model via a prompt, as another emerging area of risk that requires organizations to have new controls in place.

MLSecOps, Machine Learning Detection and Response and the new AI security landscape

The IBM Framework for Securing Generative AI is not a single tool, but rather a set of guidelines and suggestions for tools and practices to secure gen AI workflows.

There also isn’t any single term to define the different types of tools that are needed to secure gen AI. The emergence of generative AI and its associated risks is leading to the debut of a series of new categories in security including Machine Learning Detection and Response (MLDR), AI Security Posture Management (AISPM) and Machine Learning Security Operation (MLSecOps) 

MLDR is about scanning models and identifying potential risks, while AISPM is similar in concept to Cloud Security Posture Management (CSPM) which is all about having the right configuration and best practices in place to have a secure deployment. 

“Just like we have DevOps and we added security and call DevSecOps, the idea is that MLSecOps is a whole end to end lifecycle, all the way from design, to the usage and it provides that infusion of security,” Muppidi said.

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.



Source link