Innovative cybersecurity framework targets rising insider attacks

Insider cyberattacks remain one of the most persistent and financially damaging threats facing modern organisations. Because insiders already hold legitimate access privileges, their malicious activity is hard to distinguish from normal work, making insider incidents one of the toughest challenges for cybersecurity teams worldwide.

At Georgia State University in the United States, researchers including Nigerian cybersecurity scholar Olusesi Balogun have developed artificial intelligence–driven security frameworks aimed at detecting and mitigating these high-risk attacks before they escalate.

Balogun’s first major contribution is a deception-enhanced access-control system that embeds cyber-deception mechanisms directly into Attribute-Based Access Control (ABAC), the policy framework that governs user access to sensitive resources. The system introduces components such as a Sensitivity Estimator, a Honey-Attribute Generator, and a deception-integrated monitoring process.

These features work together to identify suspicious activity from credentialed users attempting to probe, map, or manipulate access rules. By incorporating deception at the access policy layer, the design introduces uncertainty for potential malicious insiders while allowing normal operations to continue uninterrupted.
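The idea of deception at the access-policy layer can be illustrated with a minimal sketch. The class and attribute names below are hypothetical, and the real system's Sensitivity Estimator and Honey-Attribute Generator are far more sophisticated; this only shows the core pattern of seeding decoy attributes that legitimate users have no reason to claim:

```python
class DeceptiveABAC:
    """Toy attribute-based access control with decoy (honey) attributes.

    A request that claims a honey attribute never matches a real policy,
    but it raises an alert: legitimate users have no reason to probe decoys.
    """

    def __init__(self):
        self.policies = []        # (required_attrs, resource) pairs
        self.honey_attrs = set()  # decoy attributes seeded into the schema
        self.alerts = []          # suspected policy-probing events

    def add_policy(self, required_attrs, resource):
        self.policies.append((frozenset(required_attrs), resource))

    def seed_honey_attribute(self, attr):
        # Decoys are made to look like plausible clearances.
        self.honey_attrs.add(attr)

    def check(self, user_attrs, resource):
        probes = set(user_attrs) & self.honey_attrs
        if probes:
            # A credentialed user claiming a decoy attribute is mapping
            # the policy space; record it, then deny as if nothing special.
            self.alerts.append((resource, sorted(probes)))
            return False
        return any(req <= set(user_attrs) and res == resource
                   for req, res in self.policies)


abac = DeceptiveABAC()
abac.add_policy({"role:analyst", "dept:finance"}, "quarterly_report")
abac.seed_honey_attribute("role:vault_admin")

print(abac.check({"role:analyst", "dept:finance"}, "quarterly_report"))  # True
print(abac.check({"role:vault_admin"}, "quarterly_report"))              # False, alert logged
print(len(abac.alerts))                                                  # 1
```

Normal requests pass through unchanged, which is what lets routine operations continue while probing behaviour is silently flagged.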

A second system developed by Balogun applies Moving-Target Defense (MTD) principles to enterprise authorisation. Known as MTD-ABAC, the framework periodically shifts critical access-policy parameters, making it significantly harder for attackers to study or exploit system behavior. While MTD techniques are common in advanced cyber defense, they are rarely implemented in authorisation systems, placing this work at the forefront of an emerging area in access-control research.
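The moving-target principle behind such a design can be sketched in a few lines. This is not MTD-ABAC itself; it is a hypothetical illustration in which attribute identifiers are remapped every rotation epoch, so any policy surface an attacker maps goes stale after the next shift:

```python
import hashlib


class MTDPolicyStore:
    """Toy moving-target defence for authorisation (illustrative only):
    attribute names are aliased with an epoch-keyed hash, so identifiers
    observed before a rotation are useless afterwards."""

    def __init__(self, secret: bytes):
        self.secret = secret
        self.epoch = 0
        self.policies = {}  # resource -> set of canonical attribute names

    def _alias(self, attr: str) -> str:
        # Epoch-keyed alias; only the enforcement point, which holds the
        # secret, can regenerate the mapping for the current epoch.
        digest = hashlib.sha256(
            self.secret + self.epoch.to_bytes(4, "big") + attr.encode()
        ).hexdigest()[:12]
        return f"attr_{digest}"

    def add_policy(self, resource, attrs):
        self.policies[resource] = set(attrs)

    def rotate(self):
        self.epoch += 1  # invalidates every previously observed alias

    def visible_policy(self, resource):
        # What a probing client can observe: epoch-specific aliases only.
        return {self._alias(a) for a in self.policies[resource]}


store = MTDPolicyStore(secret=b"enforcement-point-key")
store.add_policy("payroll_db", {"role:hr", "clearance:2"})

before = store.visible_policy("payroll_db")
store.rotate()
after = store.visible_policy("payroll_db")
print(before.isdisjoint(after))  # True: pre-rotation reconnaissance is stale
```

The design choice here mirrors the stated goal: the defender pays a small bookkeeping cost per rotation, while the attacker must re-do reconnaissance from scratch each epoch.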

Balogun’s portfolio also includes Fair-MLBAC, a fairness-aware machine-learning access-control architecture that addresses growing concerns that AI-driven security systems may unintentionally introduce demographic or behavioural bias. The system incorporates fairness constraints, interpretability tools, and robustness checks to help ensure that automated authorisation decisions remain transparent, equitable, and resistant to adversarial manipulation.
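One common way such fairness constraints are operationalised, shown here as a generic sketch rather than Fair-MLBAC's actual method, is to measure the gap in grant rates across groups before deploying a model for authorisation decisions:

```python
from collections import defaultdict


def demographic_parity_gap(decisions):
    """decisions: iterable of (group, granted) pairs.

    Returns the largest difference in access-grant rates between any two
    groups. A fairness-aware pipeline would refuse to deploy a model whose
    gap exceeds a chosen tolerance.
    """
    totals = defaultdict(int)
    grants = defaultdict(int)
    for group, granted in decisions:
        totals[group] += 1
        grants[group] += int(granted)
    rates = [grants[g] / totals[g] for g in totals]
    return max(rates) - min(rates)


# Hypothetical audit log of automated authorisation decisions.
log = [("grp_a", True), ("grp_a", True), ("grp_b", True), ("grp_b", False)]
print(demographic_parity_gap(log))  # 0.5 -> grp_a granted 100%, grp_b 50%
```

A gap near zero suggests the model grants access at similar rates across groups; interpretability and robustness checks would then examine why individual decisions were made and whether they can be gamed.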

These research efforts are conducted at Georgia State University’s INformation Security and Privacy: Interdisciplinary Research and Education (INSPIRE) Center, which holds a National Security Agency (NSA) and Department of Homeland Security (DHS) designation as a National Center of Academic Excellence in Cyber Defense Research. Balogun’s work has been integrated into several INSPIRE initiatives and has informed studies in insider-threat modeling, moving-target defense, Zero Trust architectures, mobile-edge security, and deception-based defense across multiple research groups worldwide.

As organisations continue to adopt AI-enabled automation, these developments highlight the increasing need for security systems that balance technical innovation with fairness, robustness, and strategic adaptability.

Emerging frameworks such as those developed at INSPIRE may play a meaningful role in strengthening enterprise resilience, particularly in critical environments where insider misuse carries severe operational and financial risks.
