Cybersecurity Report: AI Security Tools Compromised at 90+ Organizations in 2025
CrowdStrike reports adversaries compromised AI security tools at over 90 organizations in 2025, raising concerns about autonomous agents with infrastructure access.
Adversaries successfully compromised artificial intelligence security tools at more than 90 organizations in 2025 by injecting malicious prompts, according to CrowdStrike's 2026 Global Threat Report. The attacks involved manipulating legitimate AI tools to steal credentials and cryptocurrency through prompt injection techniques.
The compromised tools in these documented attacks could only read and summarize data, lacking the ability to modify critical infrastructure. However, cybersecurity experts warn that a new generation of autonomous security operations center (SOC) agents now being deployed have significantly expanded capabilities, including the ability to rewrite firewall rules, modify identity and access management policies, and quarantine endpoints.
Cisco announced AgenticOps for Security in February with autonomous firewall remediation capabilities, while Ivanti launched Continuous Compliance and an AI self-service agent with built-in governance controls. State-sponsored use of AI in offensive operations increased 89% over the prior year, according to the report.
The Open Web Application Security Project (OWASP) released a Top 10 list for Agentic Applications in December 2025, documenting attack categories against autonomous AI systems. The framework identifies risks including agent goal hijacking, tool misuse, and identity abuse as key concerns for autonomous agents with write access to infrastructure.
A survey of 235 chief information security officers found that 47% had observed AI agents exhibiting unintended behavior, while only 5% felt confident they could contain a compromised agent. Palo Alto Networks reported an 82:1 machine-to-human identity ratio in the average enterprise, with each autonomous agent extending that gap.
The UK National Cyber Security Centre has warned that prompt injection attacks against AI applications "may never be totally mitigated." Security researchers emphasize that governance frameworks and oversight mechanisms need to be implemented alongside autonomous agent deployments to address the expanded attack surface.