Authors:
Balika J. Chelliah, T. K. Harikishan, Priyanga Durairaj, G. Manoj Kumar
Addresses:
Department of Computer Science and Engineering, SRM Institute of Science and Technology, Ramapuram, Chennai, Tamil Nadu, India.
The increasing use of sentiment analysis in real-world applications, such as product recommendation and opinion-based analysis, has raised concerns about the susceptibility of deep neural network (DNN)-based sentiment classification systems to adversarial attacks. Adversarial texts can imperceptibly alter valid texts, producing inaccurate outputs and security risks, particularly in safety-critical applications. While visual adversarial samples have been studied extensively, research on adversarial text in NLP is still relatively young. To address this issue, this article presents a gradient-based adversarial attack against neural network-based text classifiers. The proposed approach makes the adversarial perturbation block-sparse, so that the crafted sample deviates from the original text in only a few words. Because textual data is discrete, gradient projection is used to find the minimiser of the optimisation problem. The crafted samples were tested on the same pre-trained model, and its accuracy dropped significantly, confirming that the attack strategy is effective. The attack model demonstrates that NLP models are vulnerable to adversarial manipulation, underscoring the need for comprehensive protection in NLP applications. The results show that adversarial attacks can compromise even highly accurate models. The paper thus presents a new technique that can inform the development of defence mechanisms to improve the robustness of NLP models. Future work can examine alternative attack and defence approaches to counter adversarial texts.
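As a rough illustration of the block-sparse, gradient-projection idea summarised above, the following Python sketch perturbs the embeddings of only the top-k highest-gradient word positions and then projects each perturbed embedding back onto the nearest vocabulary entry, since text is discrete. The toy classifier, vocabulary size, number of perturbed words k, and step size are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumptions: ToyClassifier, vocabulary, k, step are hypothetical).
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
VOCAB_SIZE, EMB_DIM, NUM_CLASSES = 1000, 32, 2

class ToyClassifier(nn.Module):
    """Stand-in sentiment classifier: mean-pooled embeddings + linear head."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB_SIZE, EMB_DIM)
        self.head = nn.Linear(EMB_DIM, NUM_CLASSES)

    def forward_from_embeddings(self, e):            # e: (seq_len, EMB_DIM)
        return self.head(e.mean(dim=0, keepdim=True))

def block_sparse_attack(model, token_ids, label, k=2, step=0.5):
    """Perturb at most k word positions (one block per word embedding),
    then project each perturbed embedding onto the nearest vocabulary
    embedding because textual data is discrete."""
    emb_matrix = model.emb.weight.detach()
    e = model.emb(token_ids).detach().requires_grad_(True)

    # Gradient of the classification loss w.r.t. the input word embeddings.
    loss = F.cross_entropy(model.forward_from_embeddings(e),
                           torch.tensor([label]))
    loss.backward()
    grad = e.grad                                     # (seq_len, EMB_DIM)

    # Block sparsity: rank word positions by gradient-block norm, keep top-k.
    top_positions = grad.norm(dim=1).topk(k).indices

    adv_ids = token_ids.clone()
    for pos in top_positions:
        # Move the embedding in the direction that increases the loss.
        perturbed = e[pos].detach() + step * grad[pos] / (grad[pos].norm() + 1e-12)
        # Project onto the discrete vocabulary: nearest embedding wins.
        adv_ids[pos] = (emb_matrix - perturbed).norm(dim=1).argmin()
    return adv_ids

model = ToyClassifier()
clean = torch.randint(0, VOCAB_SIZE, (12,))           # a 12-token "review"
adv = block_sparse_attack(model, clean, label=1, k=2)
print("changed positions:", (clean != adv).nonzero().flatten().tolist())
```

Restricting the perturbation to k word-level blocks keeps the adversarial sample close to the original text, which mirrors the block-sparsity constraint described in the abstract.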
Keywords: Membership Inference Attack; Deep Neural Network; Adversarial Attacks; Machine Learning; NLP Models; Robustness and Security; Effective Attacks; Fraud Detection.
Received on: 14/12/2024, Revised on: 19/02/2025, Accepted on: 02/04/2025, Published on: 05/09/2025
DOI: 10.69888/FTSCL.2025.000431
FMDB Transactions on Sustainable Computer Letters, 2025 Vol. 3 No. 3, Pages: 158-172