Netinfo Security ›› 2024, Vol. 24 ›› Issue (10): 1570-1577. doi: 10.3969/j.issn.1671-1122.2024.10.011


A Random Walk Based Black-Box Adversarial Attack against Graph Neural Network

LU Xiaofeng1, CHENG Tianze1, LONG Chengnian2

  1. School of Cyberspace Security, Beijing University of Posts and Telecommunications, Beijing 100876, China
  2. School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
  • Received: 2024-04-16 Online: 2024-10-10 Published: 2024-09-27
  • Corresponding author: LU Xiaofeng, luxf@bupt.edu.cn
  • About the authors: LU Xiaofeng (1976—), male, from Shanxi, associate professor, Ph.D.; research interest: cyberspace security. CHENG Tianze (1999—), male, from Anhui, master's student; research interest: cyberspace security. LONG Chengnian (1977—), male, from Jiangxi, professor, Ph.D.; research interest: cyberspace security.
  • Funding: National Natural Science Foundation of China (62136006)



Abstract:

Graph neural networks have achieved remarkable success in many graph analysis tasks. However, recent studies have revealed their susceptibility to adversarial attacks. Existing research on black-box attacks typically requires the attacker to know all of the target model's training data, and is not applicable in scenarios where the attacker cannot obtain the node feature representations of the graph neural network. This paper proposed a stricter black-box attack model, in which the attacker only possessed knowledge of the graph structure and the labels of selected nodes, but remained unaware of node feature representations. Under this attack model, this paper proposed a black-box adversarial attack method against graph neural networks. The approach approximated the influence of each node on the model output and identified optimal perturbations with a greedy strategy. Experiments show that although less information is available, the attack success rate of this algorithm is close to that of state-of-the-art algorithms, while achieving a higher attack speed. In addition, the attack method also exhibits transferability and the ability to evade defenses.

Key words: artificial intelligence security, graph neural networks, adversarial attacks
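The abstract describes the method only at a high level: approximate each node's influence on the model output (the paper's title suggests a random-walk approximation) and select structural perturbations greedily. The paper's actual algorithm is not reproduced here; the following is a loose toy sketch of that general idea under the stated threat model (graph structure and some labels known, node features unknown). All function names and the wrong-label influence proxy score are assumptions for illustration only:

```python
import numpy as np

def transition_matrix(A):
    """Row-normalized random-walk transition matrix P = D^{-1} A."""
    deg = A.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0  # avoid division by zero for isolated nodes
    return A / deg

def influence(A, target, k=2):
    """Approximate each node's influence on `target` as the k-step
    random-walk probability of reaching it from `target`."""
    P = transition_matrix(A)
    return np.linalg.matrix_power(P, k)[target]

def attack_score(A, target, labels, k=2):
    """Proxy for misclassification pressure: total random-walk influence
    exerted on `target` by nodes whose label differs from its own."""
    infl = influence(A, target, k)
    return infl[labels != labels[target]].sum()

def greedy_attack(A, target, labels, budget=2, k=2):
    """Greedily toggle the single edge incident to `target` that most
    increases wrong-label influence, repeating up to `budget` times."""
    A = A.copy().astype(float)
    n = A.shape[0]
    flips = []
    for _ in range(budget):
        best, best_score = None, attack_score(A, target, labels, k)
        for v in range(n):
            if v == target:
                continue
            A[target, v] = A[v, target] = 1 - A[target, v]  # try flip
            s = attack_score(A, target, labels, k)
            if s > best_score:
                best, best_score = v, s
            A[target, v] = A[v, target] = 1 - A[target, v]  # undo
        if best is None:  # no flip improves the score; stop early
            break
        A[target, best] = A[best, target] = 1 - A[target, best]
        flips.append((target, best))
    return A, flips
```

On a small two-community graph, this greedy loop tends to cut the target's ties to same-label neighbors and add ties to wrong-label nodes, which is the intuition behind structure-only black-box attacks; the paper's own influence estimate and candidate set will differ.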
