Apiiro has expanded its AI Bill of Materials (AI BOM) to detect AI agents and Model Context Protocol (MCP) servers, extending the company’s Deep Code Analysis (DCA) to treat AI resources as first-class elements within the software graph.
What’s new: expanded AI BOM detects agents, MCP servers, and embedded models
The vendor says the update recognizes a rapid shift in development practices: engineering teams are moving beyond experimentation with generative AI frameworks to building autonomous agents, deploying MCP servers, and embedding models directly into production code. Apiiro contends these changes introduce novel application security risks (from insecure inputs and outputs to secrets exposure and data leakage) that traditional, siloed scanners often miss.
How it works: unified software graph that maps AI artifacts
Apiiro’s approach keeps AI artifacts visible within a unified software architecture graph rather than as isolated findings. The platform specifically calls out:
- Autonomous or semi-autonomous AI agents
- GenAI frameworks, including LangChain and OpenAI SDKs
- MCP servers
- AI-related secrets
- Model and dataset files, including .pt, .onnx, .ann, and similar formats
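To make the idea concrete, here is a minimal, hypothetical sketch of what an artifact-discovery pass over a repository could look like. It is not Apiiro’s implementation; the file extensions beyond those named above, the import patterns, and the secret heuristic are illustrative assumptions only.

```python
# Hypothetical sketch of AI-artifact discovery in a code repository.
# Collects three kinds of signals an AI BOM might track: model/dataset files,
# GenAI framework imports, and strings that look like AI-related secrets.
import re
from pathlib import Path

# Formats named in the article plus common extras (assumption).
MODEL_EXTENSIONS = {".pt", ".onnx", ".ann", ".safetensors", ".h5"}
GENAI_IMPORT_RE = re.compile(
    r"^\s*(?:from|import)\s+(langchain|openai|anthropic|transformers)\b",
    re.MULTILINE,
)
# Very rough pattern for OpenAI-style keys; real secret detection is far more involved.
SECRET_RE = re.compile(r"sk-[A-Za-z0-9]{20,}")

def inventory_ai_artifacts(repo_root: str) -> dict:
    """Walk a repository and return a simple AI artifact inventory."""
    findings = {"model_files": [], "genai_imports": [], "possible_secrets": []}
    for path in Path(repo_root).rglob("*"):
        if not path.is_file():
            continue
        if path.suffix.lower() in MODEL_EXTENSIONS:
            findings["model_files"].append(str(path))
        if path.suffix == ".py":
            text = path.read_text(errors="ignore")
            for match in GENAI_IMPORT_RE.finditer(text):
                findings["genai_imports"].append((str(path), match.group(1)))
            for match in SECRET_RE.finditer(text):
                # Record only a redacted prefix of anything that looks like a key.
                findings["possible_secrets"].append((str(path), match.group(0)[:8] + "..."))
    return findings

if __name__ == "__main__":
    import json, sys
    print(json.dumps(inventory_ai_artifacts(sys.argv[1] if len(sys.argv) > 1 else "."), indent=2))
```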
By mapping those resources alongside other software components, Apiiro says its DCA can correlate AI risks with signals from a range of security tools, including:
- SAST (static application security testing)
- SCA (software composition analysis)
- DAST (dynamic application security testing)
- CSPM (cloud security posture management)
- API security tools
The vendor states that DCA normalizes and deduplicates results while enriching findings with runtime and business context to enhance triage and minimize noise.
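As an illustration of what normalization and deduplication across tools can involve, the following is a small, hedged sketch; the schema fields, tool names, and dedup key are assumptions for demonstration and do not reflect Apiiro’s data model.

```python
# Hypothetical normalization and deduplication of findings from multiple scanners
# (SAST, SCA, DAST, etc.) onto one shared schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    source_tool: str   # e.g. "sast", "sca", "dast"
    component: str     # file, package, or endpoint the finding applies to
    category: str      # normalized category, e.g. "secrets-exposure"
    severity: int      # 1 (low) .. 4 (critical)

def normalize(raw: dict) -> Finding:
    """Map a tool-specific record onto the shared schema (assumed mapping)."""
    return Finding(
        source_tool=raw["tool"],
        component=raw["location"],
        category=raw.get("category", "uncategorized").lower(),
        severity=int(raw.get("severity", 1)),
    )

def dedupe(findings: list[Finding]) -> list[Finding]:
    """Collapse findings that describe the same issue on the same component,
    keeping the highest severity reported by any tool."""
    best = {}
    for f in findings:
        key = (f.component, f.category)
        if key not in best or f.severity > best[key].severity:
            best[key] = f
    return list(best.values())

# Two tools reporting the same secrets issue on the same service collapse to one finding.
raw_results = [
    {"tool": "sast", "location": "svc/payments/app.py", "category": "Secrets-Exposure", "severity": 3},
    {"tool": "sca", "location": "svc/payments/app.py", "category": "secrets-exposure", "severity": 2},
]
print(dedupe([normalize(r) for r in raw_results]))
```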
Why correlation matters
Correlation is designed to reduce noisy alerts and identify compound risks that individual tools may overlook. Apiiro illustrates this with an example: usage of the Hugging Face Python client inside a service that also exposes a sensitive API appears minor in isolation; when correlated across the software graph with API security findings and SAST results, the combined issue becomes more serious and is scored accordingly. The goal is better risk prioritization for AppSec and DevOps teams.
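A toy example of that compound-risk idea follows, with assumed signal names and an assumed escalation rule rather than Apiiro’s actual scoring: two signals that look minor on their own escalate when they co-occur on the same service.

```python
# Illustrative compound-risk scoring: signals that co-occur on one service
# escalate its rating. Signal names and the rule set are assumptions.
SERVICE_SIGNALS = {
    "inference-service": {"huggingface-client", "sensitive-api-exposed"},
    "billing-service": {"sensitive-api-exposed"},
}

# Pairs of signals that, together, indicate a compound risk (assumed policy).
COMPOUND_RULES = {
    frozenset({"huggingface-client", "sensitive-api-exposed"}): "critical",
}

def score(service: str) -> str:
    signals = SERVICE_SIGNALS.get(service, set())
    for rule_signals, rating in COMPOUND_RULES.items():
        if rule_signals <= signals:
            return rating   # all signals in the rule are present: escalate
    return "low" if signals else "none"

for svc in SERVICE_SIGNALS:
    print(svc, "->", score(svc))
# inference-service -> critical, billing-service -> low
```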
Traceability, inventory, policy enforcement, and other practical uses
Apiiro emphasizes traceability by integrating with configuration management databases such as ServiceNow to link discovered AI components to business services, assign named owners to those components, and support governance and remediation workflows, among other tasks.
According to the announcement, other immediate uses for developers and AppSec teams include:
- Inventorying AI usage across custom and open-source code
- Assessing exposure and reachability from external inputs
- Enforcing governance policies on frameworks and models
- Conducting AI-specific threat modeling (e.g., prompt injection, insecure outputs)
- Tying findings to review and remediation workflows
These capabilities aim to shorten detection-to-remediation cycles and make AI-related risks actionable within existing DevSecOps processes.
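For instance, a governance-policy check over an AI inventory might look roughly like the sketch below; the approved lists, inventory shape, and function names are hypothetical and not part of Apiiro’s product.

```python
# Hedged sketch of a governance-policy check: flag any discovered framework or
# model format that is not on an approved list. Policy contents are assumptions.
APPROVED_FRAMEWORKS = {"openai", "langchain"}
APPROVED_MODEL_FORMATS = {".onnx"}

def policy_violations(inventory: dict) -> list[str]:
    """Return human-readable violations for an AI inventory produced by a scan."""
    violations = []
    for path, framework in inventory.get("genai_imports", []):
        if framework not in APPROVED_FRAMEWORKS:
            violations.append(f"unapproved framework '{framework}' in {path}")
    for path in inventory.get("model_files", []):
        suffix = "." + path.rsplit(".", 1)[-1].lower()
        if suffix not in APPROVED_MODEL_FORMATS:
            violations.append(f"unapproved model format '{suffix}' at {path}")
    return violations

# Example inventory in the same shape as the discovery sketch above (assumed).
example_inventory = {
    "genai_imports": [("svc/chat/agent.py", "transformers")],
    "model_files": ["models/reranker.pt"],
}
for v in policy_violations(example_inventory):
    print("POLICY VIOLATION:", v)
```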
AI follows a familiar path, raising the same issues as previous innovations
AI is not the first new technology impacting how organizations approach security. As Apiiro highlights in its announcement, the onset of cloud computing also expanded the risk landscape for businesses everywhere.
“Organizations that chased technology-specific scanners ended up with fragmented tools and unsustainable backlogs. Those that modeled the entire graph of software risk, and governed it holistically, were able to scale,” the company’s official statement reads.
As Apiiro helps tech leaders apply the best practices learned in the early cloud years to AI, partners are also reflecting on the business model shifts of both eras to build a stronger approach to an AI-enabled future.