ServiceNow Assist AI agents exposed to coordinated attack
0
0

A new exploit in ServiceNow’s Now Assist platform can allow malicious actors to manipulate its AI agents into performing unauthorized actions, as detailed by SaaS security firm AppOmni.
Default configurations in the software, which enable agents to discover and collaborate with one another, can be weaponized to launch prompt injection attacks far beyond a single malicious input, says chief of SaaS Security at AppOmni, Aaron Costello.
The flaw allows an adversary to seed a hidden instruction inside data fields that an agent later reads, which may quietly enlist the help of other agents on the same ServiceNow team, setting off a chain reaction that can lead to data theft or privilege escalation.
Costello explained the scenario as “second-order prompt injection,” where the attack emerges when the AI processes information from another part of the system.
“This discovery is alarming because it isn’t a bug in the AI; it’s expected behavior as defined by certain default configuration options,” he noted on AppOmni’s blog published Wednesday.
ServiceNow Assist AI agents exposed to coordinated attack
Per Costello’s investigations cited in the blog, many organizations deploying Now Assist may be unaware that their agents are grouped into teams and set to discover each other automatically to perform a seemingly “harmless task” that can expand into a coordinated attack.
“When agents can discover and recruit each other, a harmless request can quietly turn into an attack, with criminals stealing sensitive data or gaining more access to internal company systems,” he said.
One of Now Assist’s selling points is its ability to coordinate agents without a developer’s input to merge them into a single workflow. This architecture sees several agents with different specialties collaborate if one cannot complete a task on its own.
For agents to work together behind the scenes, the platform requires three elements. First, the underlying large language model must support agent discovery, a capability already integrated into both the default Now LLM and the Azure OpenAI LLM.
Second, the agents must belong to the same team, something that occurs automatically when they are deployed to environments such as the default Virtual Agent experience or the Now Assist Developer panel. Lastly, the agents must be marked as “discoverable,” which also happens automatically when they are published to a channel.
Once these conditions are satisfied, the AiA ReAct Engine routes information and delegates tasks among agents, operating like a manager directing subordinates. Meanwhile, the Orchestrator performs discovery functions and identifies which agent is best suited to take on a task.
It only searches among discoverable agents within the team, sometimes even more than administrators realize. This interconnected architecture becomes vulnerable when any agent is configured to read data not directly submitted by the user initiating the request.
“When the agent later processes the data as part of a normal operation, it may unknowingly recruit other agents to perform functions such as copying sensitive data, altering records, or escalating access levels,” Costello surmised.
AI agent attack can escalate privileges to breach accounts
AppOmni found that Now Assist agents inherit permissions and act under the authority of the user who initiated the workflow. A low-level attacker can plant a harmful prompt that gets activated during the workflow of a more privileged employee, getting access without ever breaching their account.
“Because AI agents operate through chains of decisions and collaboration, the injected prompt can reach deeper into corporate systems than administrators expect,” AppOmni’s analysis read.
AppOmni said that attackers can redirect tasks that appear benign to an untrained agent but become harmful once other agents amplify the instruction through their specialized capabilities.
The company warned that this dynamic creates opportunities for adversaries to exfiltrate data without raising suspicion. “If organizations aren’t closely examining their configurations, they’re likely already at risk,” Costello reiterated.
LLM developer Perplexity, said in an early November blog post that novel attack vectors have broadened the pool of potential exploits.
“For the first time in decades, we’re seeing new and novel attack vectors that can come from anywhere,” the company wrote.
Software engineer Marti Jorda Roca of NeuralTrust said the public must understand that “there are specific dangers using AI in the security sense.”
Join a premium crypto trading community free for 30 days - normally $100/mo.
0
0
Securely connect the portfolio you’re using to start.





