| Conference | 7th International Conference on Wireless, Intelligent and Distributed Environment for Communication, WIDECOM 2024, October 16 - 18, 2024, Keene, New Hampshire, United States |
|---|
| Abstract | Machine learning (ML) systems are essential in applications ranging from autonomous vehicles to financial forecasting, making robust security against adversarial threats critical. Despite advances in robust ML models, vulnerabilities to attacks that exploit system weaknesses remain significant and pose serious risks in security-sensitive domains, necessitating exploration of both the theoretical and practical dimensions of these threats. This study evaluates the gap between theoretical adversarial attacks on ML models and their practical applicability. By examining a range of ML classifiers, including decision trees, support vector machines, convolutional neural networks, long short-term memory networks, and multilayer perceptrons, across diverse datasets, we explore how intrinsic properties of datasets and model architectures influence the effectiveness of adversarial attacks. Using methods such as the fast gradient sign method (FGSM), the basic iterative method (BIM), the Jacobian-based saliency map attack (JSMA), the Carlini & Wagner (C&W) attack, transferability-based attacks, label flipping, and feature collision, our investigation reveals a disparity between theoretical and practical success, shaped by factors such as attack complexity, attacker knowledge, and resource constraints. The results highlight the importance of building strong defenses that address the specific weaknesses of different ML models. Our research reassesses how feasible adversarial attacks really are, shifting the focus from purely theoretical concerns to practical constraints, and thereby provides a clearer understanding of how to make models more resilient in real-world settings, where attackers face greater limitations. This approach lays a foundation for improving the security of ML systems. |
|---|
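To make the first attack family named in the abstract concrete, the following is a minimal sketch of FGSM: perturb the input by a small step in the direction of the sign of the loss gradient with respect to the input. The logistic-regression classifier, the `fgsm_perturb` helper, and the toy weights below are illustrative assumptions, not the paper's experimental setup (which targets deeper models such as CNNs and LSTMs).

```python
import numpy as np

def fgsm_perturb(x, y, w, b, eps):
    """FGSM on a logistic-regression classifier.

    Moves x by eps in the direction of the sign of the input gradient
    of the binary cross-entropy loss.

    x : (d,) input, y : label in {0, 1}, w : (d,) weights, b : bias.
    """
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid probability of class 1
    grad_x = (p - y) * w           # dL/dx for binary cross-entropy
    return x + eps * np.sign(grad_x)

# Toy demo: a point correctly classified as class 1 (w @ x + b > 0).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
x_adv = fgsm_perturb(x, y=1, w=w, b=b, eps=1.0)
# The perturbation pushes x against the weight direction, so the
# adversarial point falls on the other side of the decision boundary.
```

In this sketch the attacker has full white-box access to the model's weights; the abstract's distinction between theoretical and practical success turns precisely on whether such access, compute, and per-sample gradient queries are realistic for a deployed system.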