
API Security Risks and AI Threats: Expert Insights

Myriad360’s Field CISO shares how partners can tackle API security gaps and evolving AI-driven threats to keep organizations resilient and secure.

Oct 6, 2025

Jeremy Ventura is the field CISO at global systems integrator Myriad360. In this Q&A with Channel Insider, he shares his insights on API security needs and how partners and their clients can remain secure in the AI era.

This Q&A was lightly edited for grammar and style.

API risks: forgotten connections, sprawling use cases, and more

Are APIs still a lesser-known (and lesser-addressed) threat vector for many organizations?

Unfortunately, yes! In many organizations, APIs are still treated as an afterthought or “just another endpoint,” rather than a first-class security domain. Attackers increasingly string together reconnaissance, evasion, and logic abuse against APIs to create long-play attacks. Too often, security tooling is built around web apps and UIs, not the machine-to-machine interfaces that drive modern architectures. Until more organizations understand that APIs are one of the primary access planes for data and functionality, they’ll continue to be under-protected.

One common manifestation of this blind spot is "zombie APIs": forgotten services that are unmaintained but still live, providing a backdoor. Recently, I’ve seen clients have up to eight times more APIs than they initially believed. That kind of drift is a natural byproduct of evolving microservices, serverless architectures, integration sprawl, and the rapid pace of delivery. The mismatch between inventory, visibility, and protective controls is still a big gap in many enterprises.
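
To make that inventory gap concrete, here is a minimal sketch (in Python, with purely hypothetical routes and log lines) of one way teams surface drift: diffing the endpoints documented in an API spec against the endpoints actually serving traffic in gateway logs. Real discovery tooling is far more involved; this only illustrates the visibility mismatch described above.

    import re

    # Endpoints documented in the current API spec (the "known" inventory).
    # These paths are hypothetical illustration data.
    documented = {
        ("GET", "/v2/orders"),
        ("POST", "/v2/orders"),
        ("GET", "/v2/customers"),
    }

    # Simplified, hypothetical gateway access-log lines: method, path, status.
    access_log = [
        "GET /v2/orders 200",
        "GET /v1/orders 200",         # old version still answering traffic
        "POST /internal/export 200",  # integration endpoint nobody documented
        "GET /v2/customers 200",
    ]

    observed = set()
    for line in access_log:
        match = re.match(r"(\w+) (\S+) \d{3}", line)
        if match:
            observed.add((match.group(1), match.group(2)))

    # Anything serving traffic but missing from the documented inventory is a
    # candidate zombie or shadow API that deserves review.
    for method, path in sorted(observed - documented):
        print(f"Undocumented but live: {method} {path}")

Running the sketch flags GET /v1/orders and POST /internal/export, the kind of forgotten-but-live services behind the eight-fold inventory surprises Ventura describes.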

Why API security and AI adoption are complementary challenges

What do AppSec teams need to keep top of mind as we enter an AI-enabled security landscape?

First, threat models must evolve. The assumption that attackers act slowly, manually, or with limited compute no longer holds. In a world where intelligent agents can automate reconnaissance, payload generation, fuzzing, or adaptive attacks, AppSec teams need to shift from static defenses to adaptive, risk-based controls. Generative AI is enabling adversaries to scale and speed up tasks we once considered “hard” or tedious.

Simply put, transparency and interpretability will matter more than ever. As we embed ML/AI into security tooling and into applications themselves (for example, for anomaly detection or behavior scoring), teams need guardrails to avoid false positives, bias, poisoning attacks, or drift. Collaboration across Dev, Sec, and Data/ML groups is nonnegotiable. AI introduces new dependencies (models, data pipelines, feature engineering) that weren’t always within the app’s traditional security boundary. Security must be embedded throughout, not bolted on.

How do we balance the excitement around AI with the reality that it enables threat actors to build more efficient workflows of their own?

AI is the double-edged sword of our time. The same advances that help defenders, such as faster detection, anomaly scoring, and behavioral baselining, also lower the bar for attackers to assemble “attacks as a service” toolkits, adapt payloads, or orchestrate multi-stage campaigns with less human effort. Defenders should focus less on trying to match adversaries tool-for-tool and more on using AI to accelerate their own response to threats.

I constantly advise that we must not confuse hype with readiness. The excitement should not lull us into thinking AI defenses alone will suffice; we still need solid fundamentals (rate limiting, strong identity, zero trust, runtime monitoring).
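
As a concrete illustration of one of those fundamentals, below is a minimal, hypothetical sketch of per-client rate limiting using a token bucket. The capacity and refill values are illustrative assumptions only; in practice this control usually lives in an API gateway or edge proxy rather than in application code.

    import time
    from dataclasses import dataclass, field

    @dataclass
    class TokenBucket:
        capacity: float = 10.0    # maximum burst size (illustrative value)
        refill_rate: float = 5.0  # tokens added per second (illustrative value)
        tokens: float = 10.0
        last_refill: float = field(default_factory=time.monotonic)

        def allow(self) -> bool:
            now = time.monotonic()
            # Refill proportionally to elapsed time, capped at capacity.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last_refill) * self.refill_rate)
            self.last_refill = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False

    # One bucket per API client, keyed by API key, token subject, etc.
    buckets: dict[str, TokenBucket] = {}

    def handle_request(client_id: str) -> int:
        bucket = buckets.setdefault(client_id, TokenBucket())
        return 200 if bucket.allow() else 429  # 429 Too Many Requests

A client that bursts past its bucket starts receiving 429 responses until tokens refill, blunting exactly the kind of automated, high-volume abuse that AI-assisted attackers make cheap.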

Security work without additional burden: breaking down the silos

How can orgs better protect their systems without burdening dev teams or individual security experts with extra work?

One of the biggest lessons I’ve learned throughout my career is that security and development don’t succeed in silos; they succeed when they move in lockstep. Protecting systems without creating friction starts with building trust and shared ownership between the two teams. When developers feel like security is a “blocker” rather than something they’re part of, it creates unnecessary tension. Instead, the focus should be on embedding security into the culture and conversations, not just the code.

Two best practices I’ve seen work well:

  • Security liaisons or “champions” inside development teams. This model empowers certain developers to be the go-to voice for security questions and ensures context flows both ways. It helps developers feel represented in security discussions while also giving AppSec teams a natural inroad to influence decisions earlier in the lifecycle.
  • Informal knowledge-sharing sessions, like “lunch and learns.” Creating a casual, low-pressure forum where developers and security can walk through recent threats, lessons learned, or even just demo a new tool helps normalize security as part of everyday development. It’s not about training; it’s about consistent, approachable collaboration.

Read all of our latest security coverage, including news, trends, analysis, and more, right in your inbox every week by signing up for the Channel Insider newsletter.

Victoria Durgin

Victoria Durgin is a communications professional with several years of experience crafting corporate messaging and brand storytelling in IT channels and cloud marketplaces. She has also driven insightful thought leadership content on industry trends. Now, she oversees the editorial strategy for Channel Insider, focusing on bringing the channel audience the news and analysis they need to run their businesses worldwide.



