SHARE
Facebook X Pinterest WhatsApp

CyberArk Launches FuzzyAI to Test AI Models for Security Risks

CyberArk introduces FuzzyAI, an open-source tool to test AI models for security vulnerabilities. Learn how it helps organizations secure AI adoption.

Dec 12, 2024
Channel Insider content and product recommendations are editorially independent. We may make money when you click on links to our partners. Learn More

Identity security solution vendor CyberArk has launched its new tool designed to test AI models and determine potential security issues before problems arise. CyberArk said in a press release announcing FuzzyAI that the tool has jailbroken every model it tested, pointing to significant flaws across AI adoption, leaving organizations vulnerable as they utilize emerging technologies.

“With the explosion of GenAI adoption and the multitude of models becoming available over the last two years, we knew the industry needed a systematic approach to testing, so we started to work on creating a tool that would do that,” said Eran Shimony, the principal vulnerability researcher at CyberArk.

CyberArk offers a systematic approach to testing AI models

FuzzyAI was built to test the security risks introduced to an organization by adopting AI and GenAI tools to address threats before they wreak havoc. This comes as companies continue to adopt these tools at high rates: IDC forecasts global AI spending will hit $632 billion by 2028. FuzzyAI promises users a more comprehensive and systematic approach to testing than many can do manually.

“There’s always going to more to security than just the model being deployed, but this is important,” said Shai Dvash, a CyberArk labs innovation engineer. “This is more of a tool for developers and DevOps teams who can utilize this within the pipeline of all work and potential new workflows that use a GenAI model. Now, they can test the security of that model as they bring it to the business.”

The key features of FuzzyAI include: 

Comprehensive Fuzzing: FuzzyAI probes AI models with various attack techniques to expose vulnerabilities like bypassing guardrails, information leakage, prompt injection, or harmful output generation. 

An Extensible Framework: Organizations and researchers can add their own attack methods to tailor tests for domain-specific vulnerabilities. 

Community Collaboration: A growing community-driven ecosystem ensures continuous adversarial techniques and defense mechanism advancements. 

FuzzyAI also helps address security threats at a time when many organizations are moving quickly to adopt GenAI solutions as they come to market, in some cases without doing the proper security planning before implementing the tool. 

“In many cases, when businesses want to go very fast, as they do with the rapid pace of AI adoption, security is what gets left behind,” Shimony said.

FuzzyAI was built as an open-source model and is now available on GitHub

As of December 11, the open-source FuzzyAI is now available through the company’s GitHub page. Additionally, the CyberArk team will run a capture the flag event at Black Hat Europe Arsenal to showcase practical applications of the tool. 

“The launch of FuzzyAI underlines CyberArk’s commitment to AI security and helps organizations take a significant step forward in addressing the security issues inherent in the evolving landscape of AI model usage,” said Peretz Regev, chief product officer at CyberArk. “Developed by CyberArk Labs, FuzzyAI has demonstrated the ability to jailbreak every tested AI model. FuzzyAI empowers organizations and researchers to identify weaknesses and actively fortify their AI systems against emerging threats.” 

Shimony and Dvash both said the choice to build FuzzyAI as an open-source resource was made both to provide a widely-available solution to a security problem and to source information from a wide community of users. 

As new LLMs and GenAI tools come to market almost daily, it is nearly impossible for any one team to keep up with everything, Dvash said. But with the collective testing of what CyberArk hopes to be a large community of users, it will be much easier to stay ahead of the innovation curve.

2024 was a year full of innovation in the AI space. Read Channel Insider’s guide to Generative AI developments and trends to catch up on the most significant announcements of the year.

thumbnail Victoria Durgin

Victoria Durgin is a communications professional with several years of experience crafting corporate messaging and brand storytelling in IT channels and cloud marketplaces. She has also driven insightful thought leadership content on industry trends. Now, she oversees the editorial strategy for Channel Insider, focusing on bringing the channel audience the news and analysis they need to run their businesses worldwide.

Recommended for you...

Manny Rivelo on Evolving Channel & How MSPs Can Get Ahead
Victoria Durgin
Aug 20, 2025
Databricks Raises at $100B+ Valuation on AI Momentum
Allison Francis
Aug 20, 2025
Keepit Achieves SOC 2 Type 1 & Canadian Ingram Micro Deal
Jordan Smith
Aug 20, 2025
AI Customer Service Fails to Satisfy Consumer Needs: Verizon
Franklin Okeke
Aug 19, 2025
Channel Insider Logo

Channel Insider combines news and technology recommendations to keep channel partners, value-added resellers, IT solution providers, MSPs, and SaaS providers informed on the changing IT landscape. These resources provide product comparisons, in-depth analysis of vendors, and interviews with subject matter experts to provide vendors with critical information for their operations.

Property of TechnologyAdvice. © 2025 TechnologyAdvice. All Rights Reserved

Advertiser Disclosure: Some of the products that appear on this site are from companies from which TechnologyAdvice receives compensation. This compensation may impact how and where products appear on this site including, for example, the order in which they appear. TechnologyAdvice does not include all companies or all types of products available in the marketplace.