Identity security solution vendor CyberArk has launched its new tool designed to test AI models and determine potential security issues before problems arise. CyberArk said in a press release announcing FuzzyAI that the tool has jailbroken every model it tested, pointing to significant flaws across AI adoption, leaving organizations vulnerable as they utilize emerging technologies.
“With the explosion of GenAI adoption and the multitude of models becoming available over the last two years, we knew the industry needed a systematic approach to testing, so we started to work on creating a tool that would do that,” said Eran Shimony, the principal vulnerability researcher at CyberArk.
CyberArk offers a systematic approach to testing AI models
FuzzyAI was built to test the security risks introduced to an organization by adopting AI and GenAI tools to address threats before they wreak havoc. This comes as companies continue to adopt these tools at high rates: IDC forecasts global AI spending will hit $632 billion by 2028. FuzzyAI promises users a more comprehensive and systematic approach to testing than many can do manually.
“There’s always going to more to security than just the model being deployed, but this is important,” said Shai Dvash, a CyberArk labs innovation engineer. “This is more of a tool for developers and DevOps teams who can utilize this within the pipeline of all work and potential new workflows that use a GenAI model. Now, they can test the security of that model as they bring it to the business.”
The key features of FuzzyAI include:
● Comprehensive Fuzzing: FuzzyAI probes AI models with various attack techniques to expose vulnerabilities like bypassing guardrails, information leakage, prompt injection, or harmful output generation.
● An Extensible Framework: Organizations and researchers can add their own attack methods to tailor tests for domain-specific vulnerabilities.
● Community Collaboration: A growing community-driven ecosystem ensures continuous adversarial techniques and defense mechanism advancements.
FuzzyAI also helps address security threats at a time when many organizations are moving quickly to adopt GenAI solutions as they come to market, in some cases without doing the proper security planning before implementing the tool.
“In many cases, when businesses want to go very fast, as they do with the rapid pace of AI adoption, security is what gets left behind,” Shimony said.
FuzzyAI was built as an open-source model and is now available on GitHub
As of December 11, the open-source FuzzyAI is now available through the company’s GitHub page. Additionally, the CyberArk team will run a capture the flag event at Black Hat Europe Arsenal to showcase practical applications of the tool.
“The launch of FuzzyAI underlines CyberArk’s commitment to AI security and helps organizations take a significant step forward in addressing the security issues inherent in the evolving landscape of AI model usage,” said Peretz Regev, chief product officer at CyberArk. “Developed by CyberArk Labs, FuzzyAI has demonstrated the ability to jailbreak every tested AI model. FuzzyAI empowers organizations and researchers to identify weaknesses and actively fortify their AI systems against emerging threats.”
Shimony and Dvash both said the choice to build FuzzyAI as an open-source resource was made both to provide a widely-available solution to a security problem and to source information from a wide community of users.
As new LLMs and GenAI tools come to market almost daily, it is nearly impossible for any one team to keep up with everything, Dvash said. But with the collective testing of what CyberArk hopes to be a large community of users, it will be much easier to stay ahead of the innovation curve.
2024 was a year full of innovation in the AI space. Read Channel Insider’s guide to Generative AI developments and trends to catch up on the most significant announcements of the year.