Prompt-Injection Supply-Chain Vector Into Agent Tool Execution
Sources: 1 • Confidence: High • Updated: 2026-03-25 17:54
Key takeaways
- A PromptArmor report described a prompt-injection attack chain against Snowflake's Cortex Agent, and the report states the issue has since been fixed.
- The PromptArmor writeup portrays command-pattern allow-lists for agent tools as inherently unreliable and not trustworthy as a primary safety mechanism.
- The PromptArmor writeup positions deterministic sandboxes implemented outside the agent layer as a key mitigation area to watch for preventing similar command-execution bypasses.
- In the described attack chain, the initial vector was an agent request to review a GitHub repository whose README contained a hidden prompt injection at the bottom.
- In the described chain, the prompt injection led the agent to execute a shell command that fetched and ran attacker-hosted content using process substitution.
Sections
Prompt-Injection Supply-Chain Vector Into Agent Tool Execution
- A PromptArmor report described a prompt-injection attack chain against Snowflake's Cortex Agent, and the report states the issue has since been fixed.
- In the described attack chain, the initial vector was an agent request to review a GitHub repository whose README contained a hidden prompt injection at the bottom.
- In the described chain, the prompt injection led the agent to execute a shell command that fetched and ran attacker-hosted content using process substitution.
Allow-List Control Gap Via Shell Features (Process Substitution)
- The PromptArmor writeup portrays command-pattern allow-lists for agent tools as inherently unreliable and not trustworthy as a primary safety mechanism.
- In the described chain, the prompt injection led the agent to execute a shell command that fetched and ran attacker-hosted content using process substitution.
- Cortex treated "cat" commands as safe to run without human approval, and this safety approach failed to account for process substitution embedded within the command body.
Threat-Model Update: Treat Agent Runtime As Fully Capable Process; Mitigate With Isolation
- The PromptArmor writeup positions deterministic sandboxes implemented outside the agent layer as a key mitigation area to watch for preventing similar command-execution bypasses.
- The PromptArmor writeup states that agent-executed commands should be treated as capable of doing anything the underlying process is permitted to do.
Watchlist
- The PromptArmor writeup positions deterministic sandboxes implemented outside the agent layer as a key mitigation area to watch for preventing similar command-execution bypasses.
Unknowns
- What was the concrete impact scope of the reported Cortex Agent chain (e.g., whether it was exploited beyond a demonstration, and what data/actions were reachable)?
- What exact remediation was applied to claim the chain was fixed (e.g., tool-runner changes, shell removal, parsing hardening, permission reductions, sandboxing)?
- What shell/tool execution path was used (shell type, invocation flags, whether commands were executed via a shell at all), and how was process substitution enabled?
- What were the agent runtime’s effective privileges at the time (network egress, access to credentials, accessible datasets, filesystem write permissions)?
- Are there independent retests or follow-on reports validating that the specific bypass and closely related shell-metasyntax variants are now blocked?