
Journal article

Vision Transformer With Adversarial Indicator Token Against Adversarial Attacks in Radio Signal Classifications

Lu Zhang, Sangarapillai Lambotharan, Gan Zheng, Guisheng Liao, Xuekang Liu, Fabio Roli, Carsten Maple

IEEE Internet of Things Journal, Volume: 12, Issue: 17, Pages: 35367 - 35379

Swansea University Author: Lu Zhang

  • 69947.pdf (PDF, 5.15 MB) | Accepted Manuscript

    Author accepted manuscript released under the terms of a Creative Commons CC-BY licence, in accordance with the Swansea University Research Publications Policy (rights retention).


Published in: IEEE Internet of Things Journal
ISSN: 2327-4662
Published: Institute of Electrical and Electronics Engineers (IEEE) 2025

URI: https://cronfa.swan.ac.uk/Record/cronfa69947
Abstract: The remarkable success of transformers across various fields such as natural language processing and computer vision has paved the way for their applications in automatic modulation classification, a critical component in the communication systems of Internet of Things (IoT) devices. However, it has been observed that transformer-based classification of radio signals is susceptible to subtle yet sophisticated adversarial attacks. To address this issue, we have developed a defensive strategy for transformer-based modulation classification systems to counter such adversarial attacks. In this paper, we propose a novel vision transformer (ViT) architecture by introducing a new concept known as the adversarial indicator (AdvI) token to detect adversarial attacks. To the best of our knowledge, this is the first work to propose an AdvI token in ViT to defend against adversarial attacks. By integrating an adversarial training method with a detection mechanism using the AdvI token, we combine a training-time defense and a running-time defense in a unified neural network model, which reduces the architectural complexity of the system compared to detecting adversarial perturbations using separate models. We investigate the operational principles of our method by examining the attention mechanism. We show that the proposed AdvI token acts as a crucial element within the ViT, influencing attention weights and thereby highlighting regions or features in the input data that are potentially suspicious or anomalous. Through experimental results, we demonstrate that our approach surpasses several competitive methods in handling white-box attack scenarios, including those utilizing the fast gradient method, projected gradient descent attacks and the basic iterative method.
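The abstract describes the AdvI-token design only at a high level. The PyTorch sketch below illustrates the general idea of prepending a learnable adversarial indicator token alongside the class token and attaching a binary detection head to its output. The class name AdvIViT, all dimensions, and the use of standard nn.TransformerEncoder layers are illustrative assumptions, not details taken from the paper.

# Minimal sketch (not the authors' implementation) of a ViT encoder that
# prepends an adversarial indicator (AdvI) token alongside the class token.
# All names, dimensions and the detection head are illustrative assumptions.
import torch
import torch.nn as nn

class AdvIViT(nn.Module):
    def __init__(self, num_patches=128, patch_dim=64, embed_dim=128,
                 num_classes=11, depth=4, num_heads=4):
        super().__init__()
        self.patch_embed = nn.Linear(patch_dim, embed_dim)
        # Learnable class token (for modulation classification) and
        # AdvI token (for adversarial-example detection).
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.advi_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 2, embed_dim))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        self.cls_head = nn.Linear(embed_dim, num_classes)   # modulation classes
        self.advi_head = nn.Linear(embed_dim, 2)            # clean vs. adversarial

    def forward(self, x):
        # x: (batch, num_patches, patch_dim) signal patches
        b = x.size(0)
        tokens = self.patch_embed(x)
        cls = self.cls_token.expand(b, -1, -1)
        advi = self.advi_token.expand(b, -1, -1)
        tokens = torch.cat([cls, advi, tokens], dim=1) + self.pos_embed
        out = self.encoder(tokens)
        # Classification from the class token, detection from the AdvI token.
        return self.cls_head(out[:, 0]), self.advi_head(out[:, 1])

# Example usage: a batch of 8 signals, each split into 128 patches of 64 samples.
# logits, advi_logits = AdvIViT()(torch.randn(8, 128, 64))

In this arrangement the classification head predicts the modulation class while the AdvI head flags inputs that appear adversarially perturbed, so the training-time defense (adversarial training) and the running-time defense (detection) share a single model, as described in the abstract.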
College: Faculty of Science and Engineering
Funders: This work is supported by UKRI through research grants EP/R007195/1 (Academic Centre of Excellence in Cyber Security Research - University of Warwick) and EP/Y028813/1 (National Hub for Edge AI). S. Lambotharan acknowledges the financial support of the Engineering and Physical Sciences Research Council (EPSRC) under grants EP/X012301/1, EP/X04047X/1 and EP/Y037243/1. This work was also partially supported by projects SERICS (PE00000014) and FAIR (PE00000013) under the MUR National Recovery and Resilience Plan funded by the European Union - NextGenerationEU.
Issue: 17
Start Page: 35367
End Page: 35379