Artificial intelligence now powers many cyber defense tools, from malware detection to intrusion monitoring. But attackers have learned how to manipulate those systems, subtly altering malicious code to slip past AI defenses. A new framework aims to close that gap.
Research co-authored by Reza Ebrahimi, assistant professor at the University of South Florida, introduces RADAR, short for Reinforcement learning-based ADversarial Attack Robustness. The study appears in MIS Quarterly.
RADAR strengthens AI-powered cyber defense by modeling security as an ongoing game between attackers and defenders. In the first phase, the system uses deep reinforcement learning to simulate realistic, step-by-step adversarial attacks, essentially training the AI to anticipate how hackers might try to evade detection. In the second phase, those simulated attacks are used to retrain and harden defensive models.
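The two-phase loop can be pictured with a deliberately simplified sketch (this is not the authors' implementation; all names, the threshold "detector," and the greedy attacker standing in for the deep RL agent are illustrative assumptions):

```python
def feature_sum(sample):
    return sum(sample)

def detect(sample, threshold):
    # Toy detector: flag a sample as malicious when its feature sum
    # exceeds a learned threshold (a stand-in for a real ML model).
    return feature_sum(sample) > threshold

def simulate_attack(sample, threshold, budget=3):
    # Phase 1 stand-in: instead of a deep RL agent, a greedy attacker
    # zeroes out the largest feature step by step until the sample
    # evades the detector or the modification budget is spent.
    evasive = list(sample)
    for _ in range(budget):
        if not detect(evasive, threshold):
            break
        i = max(range(len(evasive)), key=evasive.__getitem__)
        evasive[i] = 0
    return evasive

def harden(benign, malicious, threshold):
    # Phase 2: use the simulated evasive variants to pick a new
    # threshold that still separates them from benign samples
    # (assumes the two classes remain separable).
    evasive = [simulate_attack(m, threshold) for m in malicious]
    lower = max(feature_sum(b) for b in benign)
    upper = min(feature_sum(e) for e in evasive)
    return (lower + upper) / 2

# Illustrative data: three-feature samples, made up for this sketch.
benign = [[1, 0, 2], [0, 1, 1]]
malicious = [[9, 8, 7], [6, 9, 5]]
t0 = 10                                  # original detector threshold
t1 = harden(benign, malicious, t0)       # hardened threshold

# Evasive variants slip past the original detector...
evasive = [simulate_attack(m, t0) for m in malicious]
assert not any(detect(e, t0) for e in evasive)
# ...but the hardened detector catches them without flagging benign samples.
assert all(detect(e, t1) for e in evasive)
assert not any(detect(b, t1) for b in benign)
```

In the real framework the attacker is a reinforcement learning agent acting on actual malware and the defender is a trained detection model, and the game can repeat: each hardened defender defines a new environment for the attacker, which is what makes the defense an ongoing contest rather than a one-shot fix.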
The researchers tested RADAR on malware detection, a defense against one of the leading sources of financial losses from cyberattacks. Across three widely used malware detectors, adding RADAR improved adversarial robustness by as much as sevenfold compared with existing approaches.
Beyond boosting performance, the framework also offers insight into attacker behavior, helping system designers anticipate evolving threats.
Authors: Reza Ebrahimi, University of South Florida; Yidong Chai, Hefei University of Technology and City University of Hong Kong; Weifeng Li, University of Georgia; Jason Pacheco, University of Arizona; Hsinchun Chen, University of Arizona.

