Netinfo Security ›› 2025, Vol. 25 ›› Issue (6): 920-932. doi: 10.3969/j.issn.1671-1122.2025.06.007

• Special Topic: Active Network Defense •

A Masking-Based Selective Federated Distillation Scheme

ZHU Shuaishuai1,2, LIU Keqian2

  1. Key Laboratory of Network and Information Security under PAP, Xi'an 710086, China
  2. College of Cryptography Engineering, Engineering University of PAP, Xi'an 710086, China
  • Received: 2025-03-26 Online: 2025-06-10 Published: 2025-07-11
  • Corresponding author: LIU Keqian, 2753224405@qq.com
  • About the authors: ZHU Shuaishuai (1985—), male, from Shandong, professor, Ph.D.; his research interests include post-quantum cryptography, AI security, and privacy protection. LIU Keqian (2000—), male, from Sichuan, master's student; his research interest is data privacy protection.
  • Funding: Natural Science Basic Research Program of Shaanxi Province (2024JC-YBMS-546)

Abstract:

With the continuous advancement of machine learning, privacy protection has drawn increasing attention. Federated learning, a distributed machine learning framework, has been widely adopted, yet in practice it still faces challenges of privacy leakage and low efficiency. To address these challenges, this paper proposes a masking-based selective federated distillation (MSFD) scheme. Because federated distillation transfers knowledge rather than model parameters, the scheme inherently resists white-box attacks while reducing communication overhead. By introducing an AES-based encrypted masking mechanism into the shared soft labels, MSFD resolves the vulnerability of selective federated distillation to black-box attacks caused by sharing soft labels in plaintext, markedly strengthening its resistance to such attacks and thereby the overall security of selective federated distillation. Dynamic encrypted masks embedded in each client's soft labels obfuscate private information and, combined with secret-channel negotiation and per-round key updates, significantly reduce the risk of black-box attacks while preserving model performance, balancing the security and communication efficiency of federated learning. Security analysis and experimental results show that MSFD substantially lowers the success rate of black-box attacks on multiple datasets while maintaining classification accuracy, effectively improving privacy protection.

Key words: federated learning, knowledge distillation, mask mechanism, federated distillation, privacy protection
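The masking mechanism described in the abstract can be sketched roughly as follows. This is an illustrative sketch only, not the paper's implementation: HMAC-SHA256 stands in for the paper's AES-based mask generation as the keyed pseudorandom function, and the function names, the additive masking, and the per-round/per-client mask derivation (which realizes the round key update idea) are assumptions made for illustration.

```python
import hashlib
import hmac
import struct

def prf_mask(key: bytes, round_no: int, client_id: int, dim: int) -> list:
    """Derive a deterministic pseudorandom mask for one client's soft-label
    vector in a given round. Binding the mask to (round_no, client_id, index)
    means every round produces a fresh mask, mimicking per-round key updates."""
    mask = []
    for i in range(dim):
        msg = struct.pack(">III", round_no, client_id, i)
        digest = hmac.new(key, msg, hashlib.sha256).digest()
        # Map the first 8 digest bytes to a float in [0, 1).
        mask.append(struct.unpack(">Q", digest[:8])[0] / 2**64)
    return mask

def mask_soft_labels(soft_labels, key, round_no, client_id):
    """Client side: embed the encrypted mask into the soft labels before
    sharing, so an eavesdropper sees only obfuscated values."""
    m = prf_mask(key, round_no, client_id, len(soft_labels))
    return [s + r for s, r in zip(soft_labels, m)]

def unmask_soft_labels(masked, key, round_no, client_id):
    """Receiver side: a party holding the negotiated key regenerates the
    same mask and removes it, recovering the original soft labels."""
    m = prf_mask(key, round_no, client_id, len(masked))
    return [s - r for s, r in zip(masked, m)]
```

A party without the negotiated key cannot regenerate the mask, so the plaintext soft labels are never exposed on the channel; because the mask changes every round, a captured masked vector from one round reveals nothing about later rounds.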
