Wing Security, a provider of application attack surface protection, is shifting to an AI-first security strategy.
Security posture management platform adds AI risk controls and threat protection
The evolution builds on Wing’s SaaS Security Posture Management (SSPM) capabilities. The expanded platform secures the application attack surface across SaaS, third-party integrations, and AI tools. It now layers AI risk and governance controls, as well as AI-related threat detection, on top of its foundation of continuous discovery, contextual risk assessment, and real-time threat protection. This enables security teams to detect and respond instantly to emerging threats before they escalate into breaches.
Wing enables organizations that are adopting AI at scale to govern AI usage, reinforce configurations, and detect threats across applications, without slowing innovation.
Wing Security provides:
- Complete visibility across SaaS & AI: Organizations can gain visibility into all applications in use, including shadow IT and shadow AI, with context on vendors, AI capabilities, data access, compliance posture, and breach history. They can also map inter-app connections and data flows to expose risky supply-chain pathways.
- AI-specific configuration guardrails: Organizations can apply best-practice controls tailored to AI-enabled applications, along with core posture checks such as SSO, MFA, and least-privilege access.
- Real-time threat detection & response: Organizations can continuously monitor applications, integrations, and identities to detect malicious or compromised apps, suspicious automation, password spraying, account takeovers, and risky OAuth behavior, including across AI tools. They can respond instantly to emerging threats by blocking apps, revoking tokens, and remediating misconfigurations.
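To make the detection-and-response idea above more concrete, here is a minimal, hypothetical sketch of one such signal: flagging a possible password-spraying pattern when a single source fails logins against many distinct accounts within a short window. The event fields, thresholds, and logic are illustrative assumptions, not Wing Security’s implementation.

```python
# Hypothetical sketch of a password-spray heuristic over failed-login events.
# Field names and thresholds are assumptions for illustration, not Wing Security's code.
from collections import defaultdict, Counter
from datetime import timedelta

SPRAY_WINDOW = timedelta(minutes=30)   # illustrative look-back window
MIN_DISTINCT_ACCOUNTS = 10             # illustrative threshold: one source, many accounts

def detect_password_spray(failed_logins):
    """Flag source IPs whose failed logins hit many distinct accounts within SPRAY_WINDOW.

    failed_logins: iterable of dicts with 'source_ip', 'username', and 'timestamp' (datetime).
    Returns a list of (source_ip, distinct_account_count) tuples.
    """
    by_source = defaultdict(list)
    for event in failed_logins:
        by_source[event["source_ip"]].append(event)

    alerts = []
    for source_ip, events in by_source.items():
        events.sort(key=lambda e: e["timestamp"])
        in_window = Counter()   # usernames currently inside the sliding window
        left = 0
        for event in events:
            in_window[event["username"]] += 1
            # advance the left edge until the window spans at most SPRAY_WINDOW
            while event["timestamp"] - events[left]["timestamp"] > SPRAY_WINDOW:
                old = events[left]["username"]
                in_window[old] -= 1
                if in_window[old] == 0:
                    del in_window[old]
                left += 1
            if len(in_window) >= MIN_DISTINCT_ACCOUNTS:
                alerts.append((source_ip, len(in_window)))
                break   # one alert per source is enough for this sketch
    return alerts
```

A production platform would correlate a signal like this with OAuth grants, token behavior, and application context rather than rely on a single heuristic.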
“Every AI tool, or third-party integration, expands the application attack surface, and attackers are exploiting weaknesses in these apps and trusted connections at an unprecedented rate,” said Galit Lubetzky Sharon, co-founder and CEO of Wing Security. “On top of traditional third-party risk, AI introduces data exposure risk as it may learn from or retain customer data unless configured otherwise. It’s critical to know where that’s happening and apply appropriate security controls. By building AI governance, AI-aware risk assessment and AI threat detection on top of our proven SSPM core, Wing gives organizations the visibility, controls, and response they need to securely embrace AI.”
Q&A with CEO Lubetzky Sharon
Channel Insider was able to catch up with Lubetzky Sharon for a Q&A on what made Wing Security shift its primary focus to AI and how AI is impacting attack surfaces.
The following has been lightly edited for grammar and style. All answers are attributed to Lubetzky Sharon.
What motivated Wing Security to make AI the centerpiece of your security strategy now?
The world has been changing in front of our eyes. We’ve seen the rapid pace of AI adoption across our customers, alongside SaaS vendors embedding AI models directly into their platforms. While organizations are dramatically shifting toward using more and more AI-powered applications, almost all delivered as SaaS, Wing has been preparing its platform for the security challenges this shift creates.
Our foundation in SSPM gives us the breadth of visibility and context, the ability to systematically analyze vendor data, and the advanced analytics needed to address AI risks. It means Wing is uniquely positioned to deliver exactly what companies now need to govern their AI exposure: complete visibility into AI usage, clear assessment of AI risks, and improved governance over how these applications and tools are used.
AI regulations and security frameworks are still in the early stages of formation, but it’s already clear that they will have to cover visibility into AI usage and governance over how these applications are used. The EU AI Act reflects this with three fundamental requirements:
- Organizations must inventory and classify all AI systems in use, including blocking prohibited practices and identifying high-risk ones.
- They must apply the right controls on high-risk systems, such as human oversight, logging, monitoring, and the ability to suspend or report if something goes wrong.
- They must ensure transparency and accountability, from documenting risks and informing affected individuals to conducting fundamental rights impact assessments when required.
Wing’s platform gives companies a head start: we provide full visibility into all AI applications, map how data flows into them, and flag shadow adoption or unsafe configurations. From there, our governance insights allow companies to enforce controls, whether that means restricting access, tightening configurations, or preventing sensitive data exposure. This not only aligns with today’s compliance standards like GDPR, SOC 2, and ISO 27001, but also directly supports the emerging requirements of the AI Act and similar regulations worldwide.
What are you hearing from security teams or partners that made AI risk governance a top priority?
We’re constantly hearing from security teams and partners that the pace of AI adoption has outstripped their ability to govern it. Employees across every department are using AI tools to boost productivity, and SaaS vendors are rapidly embedding AI capabilities into their platforms. On one hand, this unlocks efficiency and innovation; on the other hand, it introduces risks that CISOs can’t afford to ignore: from uncontrolled data exposure, to shadow AI usage, to meeting new regulatory expectations like the EU AI Act. Security leaders tell us they don’t want to slow down the business, but they do need visibility and governance to ensure this adoption is safe. That’s why AI risk governance has become a top priority. It’s about balancing enablement with control, so organizations can embrace AI without compromising compliance or security.
How does Wing Security ensure that security teams can enforce AI guardrails without slowing down business adoption of AI?
Security teams don’t want to slow down AI adoption; they want to make sure it happens safely. Wing helps by giving them clear visibility into where AI is being used and surfacing the gaps or issues that require attention. Instead of creating new processes that add friction, we integrate with the security stack companies already rely on, whether that’s SOAR, SIEM, or other tools, so that AI governance becomes part of existing workflows. This way, teams can enforce guardrails in a way that is natural to how they already operate, enabling the business to continue adopting AI quickly while ensuring risks are addressed effectively.
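As a rough illustration of what that kind of integration could look like, the sketch below pushes a single AI-governance finding into an existing event pipeline over a generic HTTP collector. The endpoint, token, and event schema are placeholders invented for this example; they are not Wing Security’s API or any specific SIEM vendor’s.

```python
# Hypothetical sketch: forwarding one AI-governance finding to a generic SIEM/SOAR
# HTTP event collector. URL, credential, and schema are placeholders, not a real API.
import json
import urllib.request

SIEM_ENDPOINT = "https://siem.example.com/api/events"   # placeholder endpoint
SIEM_TOKEN = "REPLACE_ME"                                # placeholder credential

def forward_finding(finding: dict) -> int:
    """Send one finding (e.g., an unsanctioned AI app with broad data access) as a JSON event."""
    payload = {
        "source": "saas-ai-governance",
        "severity": finding.get("severity", "medium"),
        "app": finding.get("app_name"),
        "detail": finding.get("detail"),
    }
    req = urllib.request.Request(
        SIEM_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {SIEM_TOKEN}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:   # raises on HTTP errors
        return resp.status
```

Routing findings through the collector a team already monitors is what keeps governance inside existing workflows instead of adding yet another console to check.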
Can you share a real-world example where Wing Security identified a high-risk AI exposure or threat before it escalated into a breach?
In one case, Wing identified employees using an unvetted generative AI tool to process sensitive records. While it wasn’t yet a full-blown breach, it posed a clear risk of data exposure and a violation of GDPR and internal data-handling requirements. By surfacing this exposure early, the security team was able to intervene, prevent non-compliant data flows, and ensure the organization stayed within its regulatory obligations.
In other cases, we alerted customers about potential AI supply chain risks, for example, when a connected AI vendor suffered a breach. Since these apps often hold sensitive records or are deeply integrated with corporate workflows, such incidents can quickly cascade into data exposure. Wing’s early alerts enabled customers to evaluate the risk, restrict access levels or suspend usage of the compromised AI apps, notify impacted users, and take preventive measures before a real incident unfolded.
Where do you see AI attack surface protection evolving in the next 12-24 months?
Over the next 12-24 months, we expect the AI attack surface to expand dramatically. Organizations are increasingly relying on AI applications for their day-to-day operations, and are starting to embrace AI agents that can act autonomously, integrate across multiple SaaS platforms, and trigger actions on behalf of users. That creates an entirely new layer of exposure, not just how people interact with AI tools, but also how these agents themselves interact with other AI agents and resources, and what access they have.
We believe AI security will require continuous discovery and detection of both AI apps and agents in use, mapping their permissions and data flows, and flagging unusual or risky behaviors before they turn into an incident. In other words, the future of AI attack surface protection means extending governance from today’s applications to tomorrow’s autonomous agents.
AI security is becoming a greater concern for all organizations, and providers like Wing Security recognize this need. Read more about another company, Proofpoint, and how they’re working to secure AI workspaces and safeguard data.