
The emergence of AI in the workplace has introduced incredible opportunities for organizations to grow and improve efficiency. However, such an advanced technology also brings risks that organizations must account for.

While AI can be a powerful, transformative tool, it is not infallible, and improper use can expose an organization to a range of negative consequences. There are legal considerations when implementing AI within your organization, from AI-generated legal documents to the shifting regulatory landscape and data management. Additionally, as with any emerging technology, your organization's security posture must stay front of mind so that AI does not compromise your services or leave customers vulnerable.

AI security considerations for MSPs

While AI can be a good tool for identifying potential security threats, it can also produce false positives or fail to detect more sophisticated threats. Relying solely on AI to manage your organization's security can result in missed threats or flagged false alarms. It is key for a security expert to verify threats flagged by AI and to recalibrate AI models consistently so that they learn and produce improved results.
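
As a minimal sketch of that human-in-the-loop idea (the thresholds, field names, and routing labels here are hypothetical illustrations, not a specific vendor's API), an MSP might route only clear-cut AI verdicts automatically and send ambiguous ones to an analyst:

```python
from dataclasses import dataclass

# Hypothetical thresholds -- tune against your own false-positive/false-negative rates.
AUTO_ESCALATE = 0.90   # model is confident the alert is a real threat
AUTO_DISMISS = 0.10    # model is confident the alert is benign

@dataclass
class Alert:
    source: str
    description: str
    threat_score: float  # model-assigned probability that this event is malicious

def triage(alert: Alert) -> str:
    """Route an AI-scored alert: only clear-cut cases skip human review."""
    if alert.threat_score >= AUTO_ESCALATE:
        return "escalate"       # open an incident immediately
    if alert.threat_score <= AUTO_DISMISS:
        return "log"            # retain for auditing and model recalibration
    return "human_review"       # ambiguous: a security expert verifies

# Analyst verdicts on reviewed alerts become labeled data for retraining,
# which is the recalibration loop described above.
```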

MSPs must also be aware of how AI can amplify the effectiveness of threat actors trying to break into their systems. AI can increase the sophistication of cyber threats and improve the likelihood of successful attacks. According to Egress' Phishing Threat Trends Report, 71.4% of AI detectors cannot tell whether a phishing email has been written by a chatbot or a human.

In addition to the external risks for an MSP, there are internal risks that can significantly impact customers. An organization that is not careful about its internal AI use may see an increase in privacy concerns and cybersecurity issues that extend to its customers. MSPs should communicate clearly with clients about the potential risks of AI.

Having this dialogue is important because AI use is a two-way street. If an MSP's client is using AI in its business, the provider needs to be aware of that use and may need to provide services that protect the client and, by extension, the MSP. Both the MSP and the client should also broach the topic with their own teams, since staff members may already be using AI to automate tasks even if no AI tools have been formally introduced to the business.

To protect your organization against AI security risks, key best practices include continuous monitoring and incident response, diversity in training data, and strict data handling and validation.
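
To make the data handling and validation practice concrete, here is a short sketch (the schema, field names, and patterns are hypothetical placeholders) of checking records before they enter an AI training or inference pipeline:

```python
import re

# Hypothetical schema for records entering an AI pipeline.
REQUIRED_FIELDS = {"client_id", "event_type", "payload"}
ALLOWED_EVENT_TYPES = {"login", "file_access", "network_flow"}

def validate_record(record: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the record is clean."""
    errors = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    if record.get("event_type") not in ALLOWED_EVENT_TYPES:
        errors.append(f"unexpected event_type: {record.get('event_type')!r}")
    # Reject obvious secrets before they can leak into training data.
    if re.search(r"(?i)(password|api[_-]?key)\s*[:=]", str(record.get("payload", ""))):
        errors.append("payload appears to contain credentials")
    return errors

# Clean records pass; anything malformed is quarantined instead of trained on.
record = {"client_id": "c-123", "event_type": "login", "payload": "ok"}
assert validate_record(record) == []
```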

Legal risks and liabilities to consider when using AI

Beyond the areas where AI demands caution, a healthy organization will remain that way so long as it keeps potential legal ramifications in mind. AI is not yet a well-regulated space, so it is up to individual organizations to use it responsibly.

According to the Cornell SC Johnson College of Business, businesses are quickly adopting AI in customer operations, marketing and sales, software engineering, R&D, and other areas, but most are not prioritizing conversations about AI’s legal concerns.

Speaking at the EMBA Alumni Summit at Cornell Tech earlier this year, Garylene Javier, a privacy and cybersecurity associate at Crowell & Moring LLP, said that there are major risks for companies using AI.

“When thinking about incorporating AI in your organization, think about how you can mitigate the risk as you’re developing the system itself,” Javier said. “You can use governance, policies, and cybersecurity to strengthen the business.”

AI can be a resource for generating content, such as blog posts, and it can also help draft contracts, NDAs, and other legal documents. However, if it misses nuances or legal intricacies, it can leave your organization vulnerable to costly legal mistakes. Legal regulations evolve over time, and AI models may not reflect the shifting landscape.

AI-generated content should never be treated as a final submission. Qualified legal professionals should always review legal documents to ensure compliance with local laws and regulatory requirements.

From a liability perspective, service providers have to consider what information is being introduced to AI models, especially if they don’t own the model, says EchoStor CIO Daniel Clydesdale-Cotter.

“You start to get into the legal ramifications of AI if you don’t have strong multi-tenancy and you’re not adhering to the compliance frameworks that are required for that type of data,” Clydesdale-Cotter said. “Whether it’s financial health records, credit card transactions, or [personally identifiable information], there’s obviously a lot that can start to get dangerous from an MSP’s perspective.”
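
A minimal illustration of that data-handling concern (the regex patterns below are simplistic placeholders, not a compliance control for PCI DSS, HIPAA, or similar frameworks) is redacting obvious PII before a prompt ever leaves your tenancy for a third-party model:

```python
import re

# Simplistic placeholder patterns -- real compliance requires far more than regex redaction.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace obvious PII with labeled placeholders before text reaches an external AI model."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Customer jane@example.com, card 4111 1111 1111 1111, reported an outage."
print(redact(prompt))
# Customer [EMAIL REDACTED], card [CARD REDACTED], reported an outage.
```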

Courts are still sorting out how current laws apply to AI and potential copyright, patent, and trademark infringement, so staying on top of laws and regulations is a key best practice for organizations. New legislation governing AI use is undoubtedly on the horizon.

As AI's presence in the workplace grows, it is important to remember that AI is not just an IT problem but an enterprise-wide concern, from the C-suite on down. MSPs and other organizations looking to use AI successfully must implement governance frameworks that set policies and boundaries for their AI systems, along with human oversight to verify results and uphold the organization's established standards and ethics.

Read more about how MSPs can safely and securely help their clients meet their goals with our guide to AI managed services.
