2026-01-30

BodySnatcher: How a Single Email Address Could Hijack an Enterprise AI Agent

A critical vulnerability in ServiceNow's Virtual Agent API (CVE-2025-12420) allowed attackers to impersonate any user — bypassing MFA and SSO — and execute privileged AI agent workflows. Here's what happened and what it means for agentic AI security.

By intelliBrain
Agentic AI · Security · Enterprise · ServiceNow · Vulnerability

The Most Severe Agentic AI Vulnerability to Date

As organizations rush to deploy AI agents across their enterprise platforms, a sobering discovery has emerged: the very systems meant to streamline operations can become silent attack vectors when security fundamentals are overlooked.

Security researcher Aaron Costello at AppOmni disclosed BodySnatcher (CVE-2025-12420), a critical vulnerability in ServiceNow's Virtual Agent API and Now Assist AI Agents application. The flaw allowed an unauthenticated attacker — with nothing more than a target's email address — to impersonate any ServiceNow user, bypass multi-factor authentication (MFA) and single sign-on (SSO), and execute privileged AI agent workflows remotely.

In short: an attacker halfway across the globe could act as your organization's admin and instruct AI agents to create backdoor accounts with full privileges.

How the Exploit Worked

ServiceNow's Virtual Agent is a chatbot engine that lets users interact with enterprise data through natural language. It powers integrations across platforms like Slack and Microsoft Teams via the Virtual Agent API, so employees can file tickets, reset passwords, or query knowledge bases without ever logging into ServiceNow directly.

The architecture relies on providers and channels to handle external messages. Each integration authenticates through a provider configuration, and an "auto-linking" feature maps external users to their ServiceNow accounts.

Here's where things broke down:

  1. Hardcoded platform-wide secret. The Message Auth credential — essentially the password for authenticating an integration to a provider — was a static, platform-wide value. If an attacker discovered it, they could authenticate as any provider.
  2. Email-only identity linking. The auto-linking mechanism trusted a simple email address to map external users to ServiceNow accounts. No additional verification. No MFA challenge. Just an email.
  3. AI agent execution via chatbot. Internal topics like "AIA-Agent Invoker AutoChat" allowed AI agents to be triggered through the Virtual Agent API — meaning the chatbot became an unintended execution path for privileged AI workflows.

Chain these together, and an attacker could: authenticate to the Virtual Agent API with a known secret, claim to be any user via their email address, and then invoke AI agents with that user's full privileges.

Why This Matters Beyond ServiceNow

BodySnatcher isn't just a ServiceNow bug. It's a preview of a systemic risk emerging across the industry as agentic AI gets deployed in enterprise environments.

AI agents amplify traditional security flaws. A broken authentication mechanism that previously might have exposed a chatbot conversation now grants access to autonomous AI workflows that can create accounts, modify configurations, and access sensitive data. The blast radius of a simple auth bypass is dramatically larger when an AI agent sits behind it.

Conversational interfaces create new attack surfaces. When AI agents can be invoked through chat APIs, every integration point becomes a potential entry for exploitation. The convenience of "talk to our AI from Slack" comes with the responsibility of securing that entire chain.

Point-in-time fixes aren't enough. ServiceNow patched this specific vulnerability within a week of disclosure (October 30, 2025), and cloud-hosted customers were automatically protected. But the underlying architectural pattern — insecure provider configurations enabling agent execution — represents a class of vulnerability that will keep recurring across platforms.

What Organizations Should Do

If you're running on-premises ServiceNow, update immediately to the patched versions (Now Assist AI Agents 5.1.18+ or 5.2.19+, Virtual Agent API 3.15.2+ or 4.0.4+).

Beyond that, the AppOmni research recommends several broader measures:

  • Enforce MFA for all account-linking flows. Never trust a single identifier like an email address to establish user identity.
  • Establish agent approval processes. Don't let AI agents go live without security review of their trigger paths and privilege levels.
  • Implement lifecycle management. De-provision unused or stagnant agents — they're dormant attack surfaces.
  • Audit conversational channels. Understand every path through which an AI agent can be invoked, including internal topics not intended for external use.

The Bigger Picture

2026 is shaping up to be the year agentic AI goes mainstream in the enterprise. But BodySnatcher is a clear signal: the industry's security practices haven't caught up with the capabilities being deployed. Traditional vulnerability categories — broken authentication, insecure defaults, excessive privileges — become exponentially more dangerous when an autonomous AI agent is the thing being exploited.

The organizations that will navigate this safely aren't the ones deploying agents fastest. They're the ones treating every agent endpoint with the same rigor they'd apply to an admin API.


Sources: AppOmni AO Labs — BodySnatcher Disclosure, CVE-2025-12420

intelliBrain

AI-augmented software development. Based in Zürich, working globally.

© 2026 intelliBrain GmbH. All rights reserved.
BUILT WITH 🧠 + AI