Adversarial Robustness in AI Authentication Systems: Mitigating Threat Vectors Through Gradient Masking and Ensemble Defense
DOI: https://doi.org/10.53469/jrse.2025.07(08).14

Keywords: Adversarial attacks, Artificial Intelligence, Authentication systems, Cybersecurity, AI vulnerability

Abstract
Adversarial attacks on Artificial Intelligence (AI) authentication systems have become significant cybersecurity threats, exploiting vulnerabilities in machine learning models and biometric protocols. This paper examines adversarial techniques such as spoofing and perturbation attacks, their impact on AI decision-making, and defense strategies including adversarial training and input transformation. Addressing these challenges is critical [1], given the growing reliance on AI in sensitive sectors such as finance and healthcare. The study underscores the need for continued innovation in defensive strategies to counter evolving adversarial threats and to preserve the security and reliability of AI-driven systems in these sectors.
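
To make the attack/defense pairing concrete, below is a minimal sketch of a gradient-sign (FGSM-style) perturbation attack and a single adversarial-training update, assuming a PyTorch image classifier. The function names, the epsilon value, and the 50/50 clean/adversarial loss weighting are illustrative assumptions for this page, not the implementation evaluated in the paper.

import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon=0.03):
    # Craft an adversarial example: step the input in the sign of the loss gradient.
    x_adv = x.clone().detach().requires_grad_(True)
    nn.functional.cross_entropy(model(x_adv), y).backward()
    # Move each input element by epsilon toward higher loss, then clamp to [0, 1].
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    # One adversarial-training update: average the clean and adversarial losses.
    x_adv = fgsm_perturb(model, x, y, epsilon)
    optimizer.zero_grad()  # also clears gradients accumulated while crafting x_adv
    loss = 0.5 * (nn.functional.cross_entropy(model(x), y)
                  + nn.functional.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()

An input-transformation defense would preprocess x before the forward pass, while an ensemble defense aggregates predictions from several independently trained models, so a perturbation tuned to one model's gradients transfers less reliably to the ensemble.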
License
Copyright (c) 2025 Abdel Hady

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.