Netinfo Security ›› 2025, Vol. 25 ›› Issue (11): 1762-1773. doi: 10.3969/j.issn.1671-1122.2025.11.009

• Special Topic: Confidential Computing •

  • Corresponding author: ZHANG Zijian, wangyajie19@bit.edu.cn
  • About the authors: WANG Yajie (1993—), male, from Hebei, assistant researcher, Ph.D., CCF member; research interests: AI security, data security and privacy protection. LU Jinbiao (2002—), male, from Guangdong, master's student; research interests: AI security, privacy protection. LI Yuhang (2002—), male, from Jiangsu, master's student; research interests: AI security, privacy protection. FAN Qing (1996—), female, from Shandong, associate professor, Ph.D., CCF member; research interests: applied cryptography, information security, and security protocol design. ZHANG Zijian (1984—), male, from Beijing, professor, Ph.D., CCF member; research interests: blockchain security, communication security, data privacy. ZHU Liehuang (1976—), male, from Zhejiang, professor, Ph.D., CCF member; research interests: cryptographic algorithms and security protocols, blockchain technology, cloud computing security, big data privacy protection.
  • Supported by: National Key R&D Program of China (2023YFF0905300)

Hierarchical Dynamic Protection Algorithm for Federated Learning Based on Trusted Execution Environment

WANG Yajie1, LU Jinbiao1, LI Yuhang1, FAN Qing2, ZHANG Zijian1(), ZHU Liehuang1   

  1. School of Cyberspace Science and Technology, Beijing Institute of Technology, Beijing 100081, China
    2. School of Control and Computer Engineering, North China Electric Power University, Beijing 102206, China
  • Received:2025-07-27 Online:2025-11-10 Published:2025-12-02


Abstract:

In existing privacy-preserving schemes for federated learning, hardware-based trusted execution environments (TEEs) have emerged as a new paradigm due to their efficiency and security. However, constrained by hardware limitations, protecting too many layers with a TEE drastically reduces training efficiency, while protecting too few layers compromises privacy. To address this challenge, this paper proposed a hierarchical dynamic protection algorithm for federated learning based on TEE. Specifically, a sensitive layer dynamic selection mechanism was designed on the server side. This mechanism achieved secure parameter optimization under memory constraints through layer-wise greedy training, and was combined with an adversarial robustness evaluation model to quantify the defensive efficacy of different sensitive-layer configurations, thereby determining which layers require protection. On the client side, a hierarchical trusted training mechanism was implemented using dual-channel parameter aggregation to enable differentiated training of sensitive and non-sensitive layers. Experiments demonstrate that this hierarchical protection strategy effectively disrupts the semantic continuity of features, inducing systematic deviations between the decision boundaries of substitute models and the original model. The algorithm significantly mitigates various adversarial attacks, reducing the effectiveness of targeted attacks by more than 82%. Furthermore, the study validates the algorithm's defense capability against data poisoning attacks, showing that model accuracy can improve by more than 35% in gradient poisoning scenarios.
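The two mechanisms the abstract summarizes, greedy selection of sensitive layers under a TEE memory budget and dual-channel aggregation of sensitive versus non-sensitive layers, can be sketched roughly as follows. This is a minimal illustration only: all function names (`greedy_select_sensitive`, `dual_channel_aggregate`), the per-layer memory cost table, and the black-box robustness score are invented for this sketch, no real enclave is involved, and the paper's actual robustness evaluation model and TEE integration are considerably more involved.

```python
def greedy_select_sensitive(layers, mem_cost, robustness_score, mem_budget):
    """Greedily add the layer whose protection most improves the robustness
    score, stopping when the TEE memory budget would be exceeded."""
    selected, used = [], 0.0
    remaining = list(layers)
    while remaining:
        # Score each candidate by the robustness of the configuration
        # obtained when it is added to the already-selected layers.
        best = max(remaining, key=lambda l: robustness_score(selected + [l]))
        if used + mem_cost[best] > mem_budget:
            break  # next-best layer no longer fits in enclave memory
        selected.append(best)
        used += mem_cost[best]
        remaining.remove(best)
    return selected


def dual_channel_aggregate(client_updates, sensitive_layers):
    """Average per-layer client updates through two channels. In the real
    system the sensitive channel would run inside the enclave on protected
    parameters; here both channels are plain averages to stay runnable."""
    n = len(client_updates)
    agg = {"tee": {}, "plain": {}}
    for name, params in client_updates[0].items():
        mean = [sum(u[name][i] for u in client_updates) / n
                for i in range(len(params))]
        channel = "tee" if name in sensitive_layers else "plain"
        agg[channel][name] = mean
    return agg
```

For example, with three toy layers, per-layer costs {conv1: 2, conv2: 3, fc: 1}, per-layer robustness gains {conv1: 0.2, conv2: 0.5, fc: 0.9} summed as the score, and a budget of 4, the greedy pass protects fc first and then conv2, after which conv1 no longer fits.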

Key words: federated learning, trusted execution environment, privacy protection

CLC number: