Cybersecurity across the channel continues to shift fast. Whether it’s new AI-driven risks emerging, threat actors modernizing their campaigns, or defense teams racing to keep pace, nearly every month brings something new.
As we start the year, it’s worth getting a clearer view of what the latest research is signaling, which trends are accelerating, and what those signals mean for MSPs and the customers they support.
In this rundown, we’ll highlight what recent security research is saying, call out a few major trends to watch, and translate it all into what it means for the channel industry as a whole.
AI-driven security risks are accelerating across the channel
As expected, many of the latest security reports across the channel focus on AI and the risks it poses. As AI adoption has become more widespread, more organizations are also encountering the downsides of integrating emerging tech into their day-to-day stacks and workflows.
Chatbot integrations introduce new data exposure risks
Radware’s ZombieAgent research, for example, uncovered concerning data vulnerabilities tied to ChatGPT. According to the company, new weaknesses in ChatGPT could allow an attacker to exploit it to exfiltrate sensitive or personal information.
“The attacker can leak personal data from systems connected to ChatGPT—such as Gmail, Outlook, Google Drive, or GitHub—as well as leak sensitive information from the user’s chat history or personal memories stored inside ChatGPT,” Radware said.
Radware’s researchers also uncovered persistence methods that could allow threat actors to continuously exfiltrate conversations between a user and ChatGPT.
This is particularly concerning in an MSP and corporate context, especially for organizations that have begun integrating ChatGPT and similar LLMs into everyday workflows.
If those tools are connected to business systems, they can also serve as a pathway to sensitive data that would not normally be exposed through a single compromised account.
AI identities and “shadow privilege” expand attack surfaces
Aside from chatbot-related vulnerabilities, AI identities have also become a prime risk factor.
According to new CyberArk research, only 1 percent of organizations have fully adopted Just-in-Time (JIT) privileged access even as AI-driven identities ramp up.
At the same time, 91 percent report that at least half of their privileged access is always-on, and 33 percent say they lack clear AI access policies.
JIT access is a security principle that grants privileged access only when it’s needed and only for a predetermined period of time. This model is a strong counterbalance to the fast-paced, overprivileged nature of AI automation, where access can quickly sprawl out of control.
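To make the JIT model concrete, here is a minimal sketch of a broker that issues short-lived grants instead of always-on privilege. The class and field names (`JITAccessBroker`, `request_access`, and so on) are illustrative assumptions, not CyberArk’s API; a production system would also require approval workflows and audit logging.

```python
import time
from dataclasses import dataclass


@dataclass
class JITGrant:
    """A privileged-access grant that expires automatically."""
    principal: str    # human user, service account, or AI agent
    role: str         # privilege being granted, e.g. "db-admin"
    expires_at: float # epoch seconds

    def is_active(self) -> bool:
        return time.time() < self.expires_at


class JITAccessBroker:
    """Issues short-lived grants instead of standing (always-on) privilege."""

    def __init__(self, ttl_seconds: float = 900):
        self.ttl = ttl_seconds
        self.grants: list[JITGrant] = []

    def request_access(self, principal: str, role: str) -> JITGrant:
        # A real broker would demand justification or approval here.
        grant = JITGrant(principal, role, time.time() + self.ttl)
        self.grants.append(grant)
        return grant

    def check_access(self, principal: str, role: str) -> bool:
        # Access exists only while an unexpired grant is on file;
        # once the TTL lapses, privilege silently disappears.
        return any(
            g.principal == principal and g.role == role and g.is_active()
            for g in self.grants
        )
```

The key design point is that expiry is the default: nothing has to revoke access, because access that isn’t renewed simply stops working, which is exactly what counters the “shadow privilege” accumulation described above.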
CyberArk says this illustrates the growing issue of “shadow privilege,” in which unknown or unnecessary privileged accounts accumulate over time and quietly expand an organization’s attack surface.
API security emerges as a foundation for agentic AI trust
Another big talking point this year is agentic AI, and how organizations can prove they’re securing it properly.
Salt Security recently released a report highlighting growing apprehension around deploying agentic AI, especially for external communications, as consumers are wary of sharing personal information if they don’t trust how data is handled.
The study also highlighted that, because APIs power AI agents, API security will be a major avenue for providers to improve confidence in agentic AI interactions and agentic AI overall.
“Agentic AI is changing the way businesses operate, but consumers are clearly signalling a lack of confidence,” said Michael Callahan, chief marketing officer at Salt Security.
“What many organisations overlook is that the safety and success of AI depends on APIs that power it and they must be effectively discovered, governed and secured. Otherwise, the trust gap will widen, and the risks will escalate.”
Phishing-as-a-Service on the rise
Alongside AI-driven threats, phishing remains a persistent concern.
Phishing kits are also proliferating, according to research from Barracuda Networks. Barracuda threat analysts found that:
- The number of known phishing kits doubled in 2025.
- New phishing kits are increasingly sophisticated, evasive, and stealthy.
- MFA bypass techniques, URL obfuscation, and CAPTCHA abuse were observed in roughly half of all attacks.
- Traditional phishing scams and kits continue to thrive through constant innovation — for example, Barracuda noted 10 million Mamba 2FA attacks in late 2025.
Barracuda also reported that threat actors relied on familiar lures such as fake payment, financial, legal, digital signature, and HR-themed messages.
These are designed to trick users into clicking on malicious links or opening attachments that expose sensitive information.
Notably, they highlighted widespread spoofing of trusted brands like Microsoft, DocuSign, and SharePoint, with attackers increasingly leveraging AI to generate more convincing phishing emails and social engineering scams.
Data visibility gaps create compliance and security risk
Lastly, private data network company Kiteworks recently published research highlighting a growing problem for security and compliance teams — many organizations can’t clearly answer where their data lives or how they would prove it if asked.
Based on a survey of 225 security, IT, and compliance professionals across 10 industries and eight regions, only 36 percent of organizations said they have visibility into where their data is processed, trained, or inferred by external partners.
Meanwhile:
- 61 percent reported having fragmented audit trails that can’t produce evidence-quality documentation.
- 57 percent lack the centralized data gateways needed to track, control, and prove data flows across their environment.
“Organizations have spent years building governance frameworks on paper. Now they’re being asked to prove those frameworks work—and most can’t,” said Tim Freestone, chief strategy officer at Kiteworks.
“When a regulator asks where customer data was processed, when a board asks how AI systems are accessing sensitive information, when a sovereignty audit demands proof of data residency—nearly two-thirds of organizations will struggle to produce a clean answer. That’s not a technology gap. It’s an accountability gap.”
What this research means for MSPs in 2026
While the reports above are by no means the be-all and end-all of security this year, they do point to some clear directions for how the channel can move forward and find opportunities for everyone involved.
Modernizing defenses for AI-driven threats
First and foremost, MSPs must continue to prioritize modernizing their defenses to keep pace with AI-driven threats.
Whether that means adopting AI capabilities themselves or building stronger governance frameworks, organizations should be deliberate in implementing an AI-first defense strategy.
It’s also worth acknowledging that AI cuts both ways. It lowers the barrier to entry for less experienced attackers, but it also gives advanced threat groups more ways to scale their operations, making this approach all the more important.
In other words, MSPs should be thinking of AI-driven threats as the “new normal” to keep their clients secure.
Why security hygiene and user training still matter
At the same time, it’s equally important to complement new security tooling with the day-to-day habits that reduce risk at the ground level.
Security awareness, user training, and practical policy enforcement still matter, especially as phishing remains one of the most consistent entry points for attackers.
As Barracuda’s phishing research illustrates, enterprise users remain a primary target, and AI has only made modern scams harder to spot.
For MSPs, that means security hygiene and cybersecurity education have to remain a core part of how partners support customers.
In a more specific case, LLM hygiene is another opportunity. MSPs that can help customers define guardrails around LLM usage and data exposure will be in a stronger position to reduce risk before it turns into a real incident.
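One simple, concrete form such guardrails can take is redacting sensitive values from prompts before they ever leave the customer’s environment. The sketch below is a hypothetical example; the patterns and placeholder format are assumptions an MSP would tune to each customer’s data, not an exhaustive or standard rule set.

```python
import re

# Illustrative patterns for values that should never reach an external LLM.
# Real deployments would use far more robust detection (e.g. DLP tooling).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
}


def redact_prompt(prompt: str) -> str:
    """Replace matched sensitive values with labeled placeholders
    before the prompt is forwarded to an external LLM."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt
```

Running every outbound prompt through a filter like this gives customers a single enforcement point for LLM data exposure, which is easier to audit than per-user policy reminders.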
Data governance becomes a core MSP service opportunity
Finally, these findings reinforce how central data security has become as AI becomes more commonplace across the channel.
As organizations integrate AI into everyday workflows, data is moving more freely across enterprise environments. That creates more pathways for sensitive information to be exposed and, in turn, exploited by bad actors.
For MSPs, this also opens the door to more data-specific security services that align directly with customer needs.
Mapping customer data flows, centralizing audit evidence, and enforcing policies around AI and third-party access are just a few of the core elements of a stronger security posture that MSPs can help deliver.
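As a rough illustration of what “evidence-quality” data-flow tracking can look like, the sketch below chains each audit record to the previous one with a hash, so missing or altered entries become detectable. The schema fields are hypothetical examples, not a compliance standard.

```python
import hashlib
import json
import time


def log_data_flow(log: list, system: str, destination: str,
                  data_class: str, purpose: str) -> dict:
    """Append a structured, tamper-evident record of a data transfer.

    Field names are illustrative; a real gateway would capture whatever
    evidence its regulators and auditors require.
    """
    entry = {
        "timestamp": time.time(),
        "system": system,            # where the data originated
        "destination": destination,  # external partner or service
        "data_class": data_class,    # e.g. "pii", "financial"
        "purpose": purpose,          # why the transfer happened
    }
    # Chain each record to the previous one so gaps or edits are detectable.
    prev_hash = log[-1]["hash"] if log else "genesis"
    entry["hash"] = hashlib.sha256(
        (prev_hash + json.dumps(entry, sort_keys=True)).encode()
    ).hexdigest()
    log.append(entry)
    return entry
```

A centralized log like this is one way to answer the question Kiteworks’ research highlights: when asked where customer data was processed, the organization can produce a verifiable trail rather than reconstructing it after the fact.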