Document Type
Article
Publication Date
12-31-2025
DOI Link
https://doi.org/10.55549/ijasse.50
Abstract
This comparative study examines patterns of Large Language Model (LLM) weaponization through systematic analysis of four major exploitation incidents spanning 2023–2025. While existing research focuses on isolated incidents or theoretical vulnerabilities, this study provides one of the first comprehensive comparative frameworks analyzing exploitation patterns across state-sponsored cyber-espionage (Anthropic Claude incident), academic security research (GPT-4 autonomous privilege escalation), social engineering platforms (SpearBot phishing framework), and underground criminal commoditization (WormGPT/FraudGPT ecosystem). Through comparative analysis across eight dimensions (adversary sophistication, target selection, exploitation techniques, autonomy levels, detection evasion, attribution challenges, defensive gaps, and capability democratization), this research identifies critical cross-case patterns informing defensive prioritization. Findings reveal three exploitation mechanisms that transcend adversary type: autonomous goal decomposition via chain-of-thought reasoning (4/4 cases), dynamic tool invocation and code generation (3/4 cases), and adaptive social engineering (4/4 cases). Analysis demonstrates progressive capability democratization, from state-level sophistication (Claude: 80–90% autonomy) to academic accessibility (GPT-4: 33–83% success rates), specialized criminal tooling (SpearBot: generative-critique architecture), and mass commoditization (WormGPT: $200–1,700/year subscriptions). Comparative findings identify four cross-cutting defensive imperatives applicable regardless of adversary type: multi-turn conversational context monitoring, behavioral fingerprinting that distinguishes legitimate from malicious complex workflows, federated threat intelligence enabling rapid cross-organizational learning, and capability-based access controls proportional to LLM reasoning sophistication.
Publication
International Journal of Academic Studies in Science and Education (IJASSE)
Publisher
International Society for Academic Research in Science, Technology, and Education (ARSTE)
Volume
3
Issue
2
Pages
125–146
Department
College of Business and Management
Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Peer Reviewed
Yes
Publication History
Received: 11 July 2025 | Accepted: 28 December 2025
Recommended Citation
Antoniou, G. (2025). Patterns of LLM weaponization: A comparative analysis of exploitation incidents across commercial AI systems. International Journal of Academic Studies in Science and Education (IJASSE), 3(2), 125–146. https://doi.org/10.55549/ijasse.50
Comments
SDG alignment:
• SDG 4 – Quality Education (ethical AI literacy and workforce preparedness)
• SDG 9 – Industry, Innovation, and Infrastructure (secure and responsible AI systems)
• SDG 16 – Peace, Justice, and Strong Institutions (mitigating misuse and weaponization of AI technologies)