
CTERA, a provider of enterprise data services, has announced native support for the Model Context Protocol (MCP), a new open standard developed by Anthropic that enables AI models to access enterprise data and applications securely.

CTERA promotes secure AI integrations as demand increases

CTERA is the first hybrid cloud platform to embed an MCP Server for secure AI integration, allowing enterprises to connect LLM-based tools, including assistants such as Claude, AI-powered IDEs, and internally developed agents, directly to private data without compromising security or compliance.

MCP provides a structured, permission-aware interface for connecting LLMs to private files, a task that previously meant relinquishing control or building custom integrations.

“This launch is a major step toward an agentic enterprise where LLM-based assistants work seamlessly with an organization’s internal data,” said Aron Brand, CTO of CTERA. “We’re giving their teams a secure and intelligent way to enable real-time decisions, faster workflows, and new kinds of automation without introducing security and compliance challenges to the business.”

CTERA has embedded MCP into the CTERA Intelligent Data Platform, allowing users to look up, summarize, retrieve, manage, and create files using natural language, while enabling IT and security teams to maintain complete control over access, auditing, and encryption.

The integration lets users automate routine file management tasks without tedious navigation or coding, relying on AI-driven actions to improve productivity. MCP is the layer that connects LLMs to enterprise systems and makes these capabilities possible.
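
To give a sense of what this kind of integration involves, the sketch below shows how file operations might be exposed as MCP tools using the open-source MCP Python SDK. The tool names (`search_files`, `summarize_file`), the directory path, and their logic are hypothetical placeholders for illustration, not CTERA's actual implementation; they simply show the permission-aware tool pattern an MCP server presents to an LLM.

```python
# Minimal sketch of an MCP server exposing file operations as tools,
# using the open-source MCP Python SDK's FastMCP helper.
# Tool names, paths, and logic are illustrative, not CTERA's implementation.
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("file-server")

# Hypothetical root directory the server is allowed to expose.
ALLOWED_ROOT = Path("/srv/shared-files").resolve()


@mcp.tool()
def search_files(query: str) -> list[str]:
    """Return paths under the allowed root whose names contain the query."""
    return [
        str(p) for p in ALLOWED_ROOT.rglob("*")
        if p.is_file() and query.lower() in p.name.lower()
    ]


@mcp.tool()
def summarize_file(path: str) -> str:
    """Return the first 1,000 characters of a file for the model to summarize."""
    target = (ALLOWED_ROOT / path).resolve()
    if not target.is_relative_to(ALLOWED_ROOT):  # enforce the access boundary
        raise ValueError("Access outside the allowed root is denied")
    return target.read_text(errors="ignore")[:1000]


if __name__ == "__main__":
    # Serve over stdio so an MCP-capable assistant or agent can connect.
    mcp.run()
```

Because the server, not the model, decides which tools exist and what each tool is allowed to touch, IT and security teams retain the access control and auditing points that the article describes.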

Looking closer at MCP

MCP is a new open standard designed to let AI assistants and agents securely interact with enterprise systems through natural language. 

MCP enables models to query systems such as file storage, CRM, ticketing, or analytics; perform user-authorized actions; and maintain session context across tools and tasks.
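
As a rough illustration of that flow, the sketch below assumes the MCP Python SDK's client interface; the server script (`server.py`) and its `search_files` tool are hypothetical. An assistant host connects to the server, discovers its tools, and invokes one on the user's behalf while the session carries context across calls.

```python
# Sketch of the host (client) side of MCP: connect to a server, discover its
# tools, and call one. Assumes the MCP Python SDK; "server.py" and the
# "search_files" tool are hypothetical examples.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server_params = StdioServerParameters(command="python", args=["server.py"])


async def main() -> None:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover what the server exposes; the LLM decides which tool fits.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # Invoke a tool with user-authorized arguments; keeping the session
            # open lets context carry across further calls.
            result = await session.call_tool("search_files", {"query": "Q3 report"})
            print(result.content)


if __name__ == "__main__":
    asyncio.run(main())
```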

“As AI assistants gain mainstream adoption, the industry has invested heavily in model capabilities, achieving rapid advances in reasoning and quality,” Anthropic notes. “Yet even the most sophisticated models are constrained by their isolation from data – trapped behind information silos and legacy systems. Every new data source requires its own custom implementation, making truly connected systems difficult to scale.”

The protocol gives organizations a scalable, model-agnostic way to grant AI assistants access to existing tools and data without compromising security or control.

AI security has been a growing topic for the channel in recent months. Learn more about Snyk Labs, which recently announced its Snyk Studio platform, which lets AI-native partners integrate Snyk’s capabilities into their coding assistants through Snyk’s MCP server.
