AI-Run Cyber Espionage Is Here — And Boards Must Prepare Before the Tools Exist
In November 2025, Anthropic confirmed the first known case of a cyber espionage campaign executed primarily by autonomous AI agents. The attack, linked to the Chinese state-backed group GTG-1002, was not simply AI-assisted—it was AI-operated.
This is the turning point security leaders have warned about: an era where attackers don’t just scale with automation, they scale with autonomy.
GTG-1002 built its framework around Claude Code and open-source penetration tools. Humans provided only target selection and high-level approvals. From there, the AI handled nearly everything else: reconnaissance, vulnerability discovery, exploit creation, credential testing, lateral movement, data extraction, and full operational documentation.
Anthropic estimates that 80–90% of the campaign was conducted autonomously, at a speed no human team can match. Even more concerning: each AI action looked like a harmless administrative task. Traditional anti-malware and behavioral tools—built to detect human-driven patterns—never stood a chance.
This creates a new governance reality for boards: AI attacks have arrived before AI defenses are mature.
What Boards Can Realistically Do Now
While fully autonomous defensive platforms don’t yet exist, boards are not powerless. Governance actions do not require mature tools—they require direction, discipline, and urgency.
1. Put “AI-Driven Intrusion” on the Enterprise Risk Register
This isn’t a future scenario. It’s live. Boards must treat it as a top-tier strategic risk.
2. Require a 12–24 Month AI Defense Roadmap
Because agentic AI attacks are new, effective countermeasures are not yet obvious, so a deliberate plan is essential. Boards should ensure that management delivers:
a gap assessment
a vendor scan
pilot initiatives
a budget aligned with AI-speed threats
3. Mandate Continuous AI-Based Controls Testing
Even early-stage AI red-team tools can find vulnerabilities faster than human testers. Boards should ensure that management has the resources and support to adopt these tools and to build internal skills in this area.
4. Expand Cyber Exercises to Include AI-Autonomous Attack Scenarios
Most tabletop exercises assume human attackers operating at human speed. That assumption is now obsolete: scenarios should model autonomous agents performing reconnaissance, exploitation, and exfiltration continuously and far faster than any human response playbook anticipates.
5. Require Oversight of AI Usage Inside the Enterprise
The GTG-1002 attack began by social-engineering the AI system itself.
Boards should require:
AI access governance
logging of high-privilege AI activity
guardrails for agentic workflows
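To make these three requirements concrete, the sketch below shows one way such controls could fit together: every tool call an AI agent proposes is checked against an access policy, high-privilege actions are escalated to a human approver, and every decision is written to an audit log. All names here (the tool lists, the `authorize_tool_call` function, the agent identifiers) are hypothetical illustrations, not a real product or vendor API.

```python
# Illustrative guardrail for agentic workflows: allowlist check,
# escalation of high-privilege actions, and audit logging of every
# decision. Tool names and policies are hypothetical examples.

import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai_audit")

# Access governance: what the agent may do on its own ...
ALLOWED_TOOLS = {"read_docs", "run_unit_tests", "query_ticket_system"}
# ... and what must be escalated to a human approver.
HIGH_PRIVILEGE_TOOLS = {"run_shell", "modify_iam_policy", "export_database"}

def authorize_tool_call(agent_id: str, tool: str, args: dict) -> bool:
    """Return True if the agent may execute the call; log every decision."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "tool": tool,
        "args": args,
    }
    if tool in HIGH_PRIVILEGE_TOOLS:
        entry["decision"] = "escalated"  # held for human approval
        audit_log.info(json.dumps(entry))
        return False
    if tool not in ALLOWED_TOOLS:
        entry["decision"] = "denied"     # not on the allowlist
        audit_log.info(json.dumps(entry))
        return False
    entry["decision"] = "allowed"
    audit_log.info(json.dumps(entry))
    return True

print(authorize_tool_call("agent-7", "run_unit_tests", {"suite": "smoke"}))  # True
print(authorize_tool_call("agent-7", "export_database", {"db": "crm"}))      # False
```

The design point for boards is not the code itself but the pattern: a default-deny policy, a human in the loop for privileged actions, and a tamper-evident log that lets auditors reconstruct exactly what an agent did and when.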
Bottom Line
AI-enabled attacks are moving faster than the defensive market—but governance can move faster still. Boards don’t need perfect tools to govern a new threat. They need clarity, pressure, and foresight.