Netinfo Security ›› 2024, Vol. 24 ›› Issue (2): 262-271. doi: 10.3969/j.issn.1671-1122.2024.02.009

• Theoretical Research •

  • About the authors: LIN Yihang (1998—), male, from Fujian, M.S. candidate; research interest: federated learning. ZHOU Pengyuan (1989—), male, from Tianjin, associate research fellow, Ph.D., CCF member; research interests: trustworthy artificial intelligence, the metaverse, and ubiquitous computing. WU Zhiqian (1997—), male, from Zhejiang, M.S. candidate; research interest: knowledge graphs. LIAO Yong (1980—), male, from Hunan, professor, Ph.D.; research interests: big data processing and analytics and their applications in cybersecurity.
  • Funding: National Key Research and Development Program of China (2021YFC3300500)

Federated Learning Backdoor Defense Method Based on Trigger Inversion

LIN Yihang, ZHOU Pengyuan(), WU Zhiqian, LIAO Yong   

  1. School of Cyber Science and Technology, University of Science and Technology of China, Hefei 230031, China
  • Received:2023-10-23 Online:2024-02-10 Published:2024-03-06
  • Contact: ZHOU Pengyuan E-mail:pyzhou@ustc.edu.cn


Abstract:

As an emerging distributed machine learning paradigm, federated learning enables collaborative model training among multiple clients without uploading users' original data, thereby protecting user privacy. However, because the server cannot inspect clients' local datasets in federated learning, malicious clients can embed a backdoor into the global model through data poisoning. Traditional federated learning backdoor defenses are mostly built on the idea of model detection and ignore the inherently distributed nature of federated learning. Therefore, this paper proposes a federated learning backdoor defense method based on trigger inversion, in which the aggregation server and the distributed clients collaborate: trigger inversion is used to generate additional data that strengthens the robustness of each client's local model, thereby defending against backdoors. Experiments on different datasets show that the proposed method can effectively mitigate backdoor attacks.
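The abstract's two-step idea (invert a candidate trigger, then retrain clients on trigger-stamped data with correct labels) can be illustrated with a minimal sketch. The paper's actual method operates on deep models in a federated pipeline; here, purely as an illustrative assumption, the model is reduced to a linear scorer so the inversion gradient is closed-form, and the function names `invert_trigger` and `augment_with_trigger` are mine, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def invert_trigger(W, target, budget=2.0, steps=100, lr=0.3):
    """Search for a small additive input pattern that pushes samples toward
    class `target` under a linear scorer s(x) = W @ x. For a linear model the
    gradient of the target-class margin w.r.t. x is constant, so we ascend it
    and project the pattern onto an L2 ball of radius `budget` to keep the
    candidate trigger small (an unusually small pattern that flips predictions
    is the hallmark of a planted backdoor)."""
    delta = np.zeros(W.shape[1])
    for _ in range(steps):
        grad = W[target] - W.mean(axis=0)  # d(target margin)/dx, constant here
        delta += lr * grad
        n = np.linalg.norm(delta)
        if n > budget:
            delta *= budget / n            # L2-ball projection
    return delta

def augment_with_trigger(X, y, delta, frac=0.5):
    """Client-side robustness data: stamp the reversed trigger onto a random
    fraction of local samples while KEEPING their true labels, so that local
    retraining teaches the model to ignore the trigger pattern."""
    idx = rng.choice(len(X), size=int(frac * len(X)), replace=False)
    return np.vstack([X, X[idx] + delta]), np.concatenate([y, y[idx]])
```

For example, given a hand-crafted two-class scorer whose second class leans heavily on one "trigger" feature, `invert_trigger` recovers a pattern concentrated on that feature; in the federated setting sketched by the abstract, the server would share the reversed trigger so each client can run `augment_with_trigger` before its local training round.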

Key words: federated learning, backdoor attack, backdoor defense, robustness training, trigger inversion
