Netinfo Security ›› 2026, Vol. 26 ›› Issue (1): 49-58. doi: 10.3969/j.issn.1671-1122.2026.01.004

• Special Topic: Proactive Network Defense •

Research on a Federated Privacy Enhancement Method against GAN Attacks

SHI Yinsheng, BAO Yang, PANG Jingjing

  1. Institute of Systems Engineering, Academy of Military Sciences, People’s Liberation Army of China, Beijing 100010, China
  • Received: 2025-11-07  Online: 2026-01-10  Published: 2026-02-13
  • Corresponding author: SHI Yinsheng, shiyinshengjms@163.com
  • About the authors: SHI Yinsheng (1983—), male, from Henan, assistant researcher, M.S., research interests: network security | BAO Yang (1978—), male, from Liaoning, researcher, M.S., research interests: network security | PANG Jingjing (1992—), female, from Henan, M.S., research interests: science and technology information intelligence


Abstract:

Federated learning avoids centralized data storage through distributed training, yet it remains vulnerable to malicious clients that exploit generative adversarial network (GAN) attacks to steal private data. Traditional defenses such as differential privacy and encryption mechanisms either struggle to balance model performance against privacy effectiveness or incur high computational costs. To address the threat of GAN attacks on federated learning in image recognition tasks, this paper proposes a privacy enhancement method based on Rényi differential privacy (RDP) to improve data privacy. Under RDP's sequential composition mechanism, the privacy budget over multiple training rounds grows sublinearly rather than linearly as under traditional differential privacy, which effectively reduces the amount of noise that must be added. The method leverages this tight noise composition property by performing differentially private computation during client-side gradient updates, through gradient clipping based on weight equilibrium and optimized Gaussian noise injection, thereby reducing the risk of privacy leakage while preserving model utility. Experiments show that, with an acceptable impact on the model's global accuracy, the method protects local data, strengthens the model's privacy guarantees, effectively resists GAN attacks, and preserves the privacy of image data.
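The mechanism the abstract describes — per-client gradient clipping followed by calibrated Gaussian noise, with the cumulative privacy cost tracked under Rényi DP composition and converted back to an (ε, δ)-differential-privacy guarantee — can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the clipping norm, noise multiplier, order grid, and the standard conversion ε = ε_RDP(α) + log(1/δ)/(α − 1) are textbook choices, and all function names are hypothetical.

```python
import numpy as np

def clip_and_noise(grad, clip_norm=1.0, sigma=1.1, rng=None):
    """Clip a client gradient to L2 norm <= clip_norm, then add Gaussian
    noise calibrated to the clipping bound (the Gaussian mechanism)."""
    rng = np.random.default_rng(0) if rng is None else rng
    scale = min(1.0, clip_norm / max(np.linalg.norm(grad), 1e-12))
    return grad * scale + rng.normal(0.0, sigma * clip_norm, size=grad.shape)

def rdp_gaussian(alpha, sigma):
    """Renyi DP of order alpha for one Gaussian-mechanism release
    with L2 sensitivity 1 and noise multiplier sigma."""
    return alpha / (2.0 * sigma ** 2)

def eps_after_rounds(T, sigma, delta=1e-5, alphas=np.arange(2, 128)):
    """RDP composes additively over T rounds; the best order alpha is
    then converted to a standard (eps, delta)-DP guarantee."""
    eps = T * rdp_gaussian(alphas, sigma) + np.log(1.0 / delta) / (alphas - 1)
    return float(eps.min())
```

Because the RDP cost adds up linearly in α-space and the conversion to (ε, δ)-DP is optimized over α only afterwards, the resulting ε grows roughly like √T in the number of rounds T rather than linearly, which is the sublinear privacy-budget growth the abstract refers to.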

Key words: federated learning, GAN attacks, Rényi differential privacy, privacy enhancement

CLC number: