Netinfo Security ›› 2025, Vol. 25 ›› Issue (11): 1762-1773. doi: 10.3969/j.issn.1671-1122.2025.11.009


Hierarchical Dynamic Protection Algorithm for Federated Learning Based on Trusted Execution Environment

WANG Yajie1, LU Jinbiao1, LI Yuhang1, FAN Qing2, ZHANG Zijian1, ZHU Liehuang1

1. School of Cyberspace Science and Technology, Beijing Institute of Technology, Beijing 100081, China
  2. School of Control and Computer Engineering, North China Electric Power University, Beijing 102206, China
  Received: 2025-07-27    Online: 2025-11-10    Published: 2025-12-02

Abstract:

In existing privacy-preserving schemes for federated learning, hardware-based trusted execution environments (TEE) have emerged as a new paradigm due to their efficiency and security. However, constrained by hardware limitations, protecting too many layers with the TEE drastically reduces training efficiency, while protecting too few layers compromises privacy. To address this challenge, this paper proposed a hierarchical dynamic protection algorithm for federated learning based on TEE. Specifically, a sensitive layer dynamic selection mechanism was designed on the server side. This mechanism achieved secure parameter optimization under memory constraints through layer-wise greedy training, and combined an adversarial robustness evaluation model to quantify the defensive efficacy of different sensitive layer configurations, thereby determining which layers require protection. On the client side, a hierarchical trusted training mechanism was implemented using dual-channel parameter aggregation to enable differentiated training for sensitive and non-sensitive layers. Experiments demonstrate that this hierarchical protection strategy effectively disrupts the semantic continuity of features, inducing systematic deviations between the decision boundaries of substitute models and the original model. The algorithm significantly mitigates various adversarial attacks, reducing the effectiveness of targeted attacks by up to 82%. Furthermore, the study validates the algorithm's defense capability against data poisoning attacks, showing that model accuracy can recover by over 35% in gradient poisoning scenarios.
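The server-side mechanism described above is, in essence, a greedy search under an enclave memory budget, paired with a two-channel aggregation step on the client side. The following minimal Python sketch illustrates both ideas under stated assumptions; every name in it (layer_sizes, robustness_gain, tee_memory_budget, aggregate_dual_channel, and so on) is an illustrative placeholder, not the paper's actual interface, and real TEE handling (attestation, sealed memory, encrypted channels) is elided.

from typing import Callable, Dict, List, Set


def select_sensitive_layers(
    layer_sizes: Dict[str, int],                    # parameter size of each layer, in bytes
    robustness_gain: Callable[[Set[str]], float],   # robustness score of a protected configuration
    tee_memory_budget: int,                         # enclave memory available for protection
) -> Set[str]:
    """Greedily add the layer whose protection yields the largest marginal
    robustness gain, until the enclave memory budget is exhausted or no
    remaining layer still improves robustness."""
    protected: Set[str] = set()
    used = 0
    remaining = set(layer_sizes)
    while remaining:
        best, best_gain = None, 0.0
        for name in remaining:
            if used + layer_sizes[name] > tee_memory_budget:
                continue
            gain = robustness_gain(protected | {name}) - robustness_gain(protected)
            if gain > best_gain:
                best, best_gain = name, gain
        if best is None:  # nothing fits, or no layer strictly improves robustness
            break
        protected.add(best)
        used += layer_sizes[best]
        remaining.remove(best)
    return protected


def aggregate_dual_channel(
    client_updates: List[Dict[str, List[float]]],   # per-client, per-layer updates
    protected: Set[str],
) -> Dict[str, List[float]]:
    """Average client updates layer by layer. In a real deployment, layers in
    `protected` would be decrypted and averaged inside the enclave, while the
    remaining layers travel through the ordinary plaintext channel."""
    aggregated: Dict[str, List[float]] = {}
    for layer in client_updates[0]:
        columns = zip(*(update[layer] for update in client_updates))
        aggregated[layer] = [sum(col) / len(client_updates) for col in columns]
        # `layer in protected` decides which channel this result is handled on.
    return aggregated

Greedy selection against a memory budget is a knapsack-style heuristic; the abstract's "layer-wise greedy training" suggests the robustness score of each candidate configuration is estimated by partial retraining, which this sketch abstracts into the single robustness_gain callable.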

Key words: federated learning, trusted execution environment, privacy protection
