SHARE
Facebook X Pinterest WhatsApp

AI FOMO: How Pressure to Adopt AI is Outpacing Understanding

At Genetec’s Global Press Summit, experts warn AI adoption brings prompt injection, deepfake and alignment risks, urging layered safeguards.

Written By
thumbnail
Jordan Smith
Jordan Smith
Feb 20, 2026
Channel Insider content and product recommendations are editorially independent. We may make money when you click on links to our partners. Learn More

AI – or large language models (LLMs) – is introducing new attack surfaces, despite the new capabilities that the technology promises.  The new threats it is introducing, including prompt injection, deepfakes, and alignment risks, are huge security concerns at a strategic level.

AI FOMO is driving enterprise adoption before risk mitigation

At the Genetec Global Press Summit ‘26 this year, Mathieau Chavelier, Principal Security Architect at Genetec, broke down how organizations are rapidly embedding AI into products and workflows, exacerbated by competitive pressure.

However, the adoption of AI often occurs before fully understanding the associated risks.

“On the enterprise level, there is a huge FOMO – fear of missing out,” Chevalier said. “Companies are rushing to integrate AI in everything they do.”

Advertisement

Prompt injection emerges as a primary enterprise security threat

Among the most concerning threats to security for an organization is the proliferation of prompt injection attacks. Prompt injection enables AI systems to be manipulated into treating malicious user input as system-level instructions.

“Prompt injection is a vulnerability when an attacker manipulates an LLM, causing it to unknowingly execute the attacker’s intention,” said Chevalier. “The cardinal sin: confusing untrusted user input for commands.”

Chevalier explained that this attack strategy mirrors historical vulnerabilities like SQL injection, but is currently harder to fully eliminate due to how modern AI models are built.

Indirect prompt injection is also becoming a significant worry, turning the user into even more of a victim. Through indirect prompt injection, attackers can embed hidden instructions in emails, documents, images, and web content.

For example, in emails, a threat actor could include a prompt in white text at the end of an email that a victim can’t see. The prompt could be giving an AI agent, such as Gemini, hidden instructions which the AI would execute – often without the user realizing it.

Advertisement

Deepfakes create reputational, legal, and compliance exposure

AI is often operating without a ton of safeguards, which can lead AI systems to generate harmful or illegal content. Deepfake generation is an area that presents reputational, legal, and societal risks.

While watermarking and detection exist, these mechanisms can be bypassed – leading to brand damage, misinformation, and regulatory exposure.

“I would say deepfake is an important danger and an important risk that we face as a society today,” said Chevalier. “And what is the main technological defense against that? We do already have digital signage to prove the identity of an image. So we do have some building bocks that might help against that issue.”

Advertisement

AI alignment and ethical conflicts

So what happens when AI optimization conflicts when ethics? According to Chevalier, in stress test, models have demonstrated problematic decision-making.

“The AI decided that blackmailing the CEO in order to not be shut down was the right path,” Chevalier said about a particular AI stress test.

Ultimately, when AI is optimized for business outcomes, it may pursue logically consistent but ethically harmful strategies. Autonomous decision-making that conflicts with organizational values or compliance requirements is a significant risk for enterprises.

“You want alignment, you want security, and you want safety, so what do you do about it?” said Chevalier. “I think the first thing to realize is that an AI system is software, right? It’s not something that is magically coming out of nowhere. If it’s software, it means that classic application security techniques would still work here.”

Advertisement

Traditional cybersecurity controls still apply to AI systems

According to Chevalier, traditional cybersecurity routes that can still have efficacy include: access control, encryption, data protection, and application security best practices.

In addition to these traditional security routes, organizations should consider implementing layered controls, such as:

  • Prompt hardening: Strong system prompt separation.
  • Guardrails/AI firewall: Secondary filtering models.
  • Human-in-the-loop: Human oversight for high-impact actions.
  • Monitoring and logging: Trace analysis and auditability.
  • Red teaming: Continuous adversarial testing.

AI can create a competitive advantage, but only if it is deployed with structured governance, technical safeguards, and executive oversight. 

AI introduces new attack surfaces, and prompt injection is turning into a primary near-term security threat, along with the emergence of deepfakes and alignment risks.

Chevalier demonstrated that responsible deployment requires layered safeguards, not simply blind adoption – stating that: “From now on, we’re going to assume that AI is fallible.”

The Genetec Summit also saw the debut of new investigation capabilities in the Genetec Security Center SaaS. Read more about how the capabilities assist enterprises in incident response resolution and rapidly return organizations to daily operations.

thumbnail
Jordan Smith

Jordan Smith is a news writer who has seven years of experience as a journalist, copywriter, podcaster, and copyeditor. He has worked with both written and audio media formats, contributing to IT publications such as MeriTalk, HCLTech, and Channel Insider, and participating in podcasts and panel moderation for IT events.

Recommended for you...

Study: AI a Priority for Testing Teams Even as Doubt Remains
Victoria Durgin
Feb 19, 2026
Quest Software Debuts Platform to Deliver Trustworthy AI Data
Luis Millares
Feb 17, 2026
Video: Hype, Bubble, or Breakthrough? Talking All Things AI with The Neuron
Katie Bavoso
Feb 12, 2026
Video: SurePath AI CEO Secure GenAI Adoption with Zero Trust
Katie Bavoso
Feb 11, 2026
Channel Insider Logo

Channel Insider combines news and technology recommendations to keep channel partners, value-added resellers, IT solution providers, MSPs, and SaaS providers informed on the changing IT landscape. These resources provide product comparisons, in-depth analysis of vendors, and interviews with subject matter experts to provide vendors with critical information for their operations.

Property of TechnologyAdvice. © 2026 TechnologyAdvice. All Rights Reserved

Advertiser Disclosure: Some of the products that appear on this site are from companies from which TechnologyAdvice receives compensation. This compensation may impact how and where products appear on this site including, for example, the order in which they appear. TechnologyAdvice does not include all companies or all types of products available in the marketplace.