Agentic application security platform Apiiro is debuting AutoFix AI Agent, an industry-first AI agent that automatically fixes design and code risks using runtime context.
Meeting developers where they are through an MCP connection
The tool can operate within a developer’s integrated development environment (IDE) without needing plug-ins, using a remote Model Context Protocol (MCP) connection.
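In practice, IDEs that support MCP register remote servers in a small configuration file rather than through an installed plug-in. As a rough illustration only (the server name and URL below are hypothetical, not Apiiro's actual endpoint, and the exact file location and authentication mechanism vary by IDE), a Cursor-style `mcp.json` entry for a remote server might look like:

```json
{
  "mcpServers": {
    "apiiro-autofix": {
      "url": "https://mcp.example.com/sse"
    }
  }
}
```

Once registered, the IDE's AI assistant can discover and invoke the tools the remote server exposes, which is what lets a service like AutoFix operate inside the editor without a dedicated extension.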
“We’re meeting developers where they are – in their IDEs with deep code-to-runtime context – and giving them the secure path forward without slowing them down,” said Moti Gindi, Chief Product Officer at Apiiro. “It’s about empowering developers to fix risks and not vulnerabilities – in real time, with the runtime context, software architecture, and organization policy.”
Apiiro cites the growth of AI coding assistants such as GitHub Copilot, Gemini Code Assist, and Cursor as a key reason for developing the tool. Because AI code assistants operate with limited or no context and aren’t governed by existing security tools, they open the door to more vulnerabilities, unvetted technologies, business logic risks, and code that bypasses organizational security policies and architectural standards.
Risks found in AI-generated code
According to the Center for Security and Emerging Technologies (CSET), a policy research organization within Georgetown University’s Walsh School of Foreign Service, up to 50 percent of the code generated by AI code assistants contains vulnerabilities, and 10 percent of those are actively exploitable with true business impact.
CSET states that the ability of large language models (LLMs) and other AI systems to generate computer code poses direct and indirect cybersecurity risks. Among these risks are models generating insecure code, models being vulnerable to attack and manipulation, and downstream cybersecurity impacts, such as feedback loops that affect the training of future AI systems.
CSET found that code generation models require evaluation for security, a task that is currently challenging.
“Evaluation benchmarks for code generation models often focus on the models’ ability to produce functional code, but do not assess their ability to generate secure code, which may incentivize a deprioritization of security over functionality during model training,” the report states.
The AutoFix AI Agent scales expertise across development teams, automatically generates threat models for risky feature requests before any code is written, and fixes static application security testing (SAST), software composition analysis (SCA), secrets, and API security findings. The agent leverages unique runtime context to make precise, risk-based decisions, understanding each organization’s specific software architecture, security policies, business impact, and risk acceptance lifecycle. This enables the tool to deliver autofixes that align with enterprise standards, rather than relying on one-size-fits-all solutions.
“AI code assistants represent one of the most transformative productivity tools of our lifetime. But by focusing solely on code, they lack context – missing critical signals like security policies and standards, compensating controls, and business risk,” said Idan Plotnik, Co-founder and CEO of Apiiro. “This disconnect introduces significant risk to enterprises, as ungoverned AI coding tools are adopted faster than application security teams can keep up. Our AutoFix AI Agent doesn’t just detect issues – it intelligently fixes them using the same contextual understanding and organizational knowledge that application security and risk management teams rely on to make informed decisions.”
The AutoFix AI Agent utilizes unique data generated from Apiiro’s platform that creates a map of software architecture across all material changes, powered by its Deep Code Analysis (DCA), Code-to-Runtime matching, and Risk Graph engine.
Among the core capabilities of the AI Agent are:
- AutoFix, which automatically fixes design and code risks with runtime context.
- AutoGovern to enforce policies, standards, and secure coding guardrails automatically.
- AutoManage for automating risk lifecycle management and measurement across the SDLC.
“In a world where AI generates code, no software should ship without an AI AppSec agent securing it,” said Plotnik. “We’re enabling security teams to unlock full developer productivity while automatically fixing the most critical risks to the business.”