Netinfo Security ›› 2024, Vol. 24 ›› Issue (12): 1799-1818.doi: 10.3969/j.issn.1671-1122.2024.12.001


A Survey on Trusted Execution Environment Based Secure Inference

SUN Yu1, XIONG Gaojian1, LIU Xiao2(), LI Yan3   

  1. School of Cyber Science and Technology, Beihang University, Beijing 100191, China
    2. China Telecom Digital Intelligence Technology Co., Ltd., Beijing 100036, China
    3. Phytium Information Technology Co., Ltd., Beijing 100083, China
  • Received:2024-10-15 Online:2024-12-10 Published:2025-01-10

Abstract:

Machine learning technologies, especially deep neural networks, have gained popularity in fields such as autonomous driving, smart homes, and voice assistants. In scenarios with strict real-time requirements, many service providers deploy models on edge devices to avoid network latency and communication costs. However, service providers lack absolute control over edge devices, leaving deployed models vulnerable to attacks such as model stealing, fault injection, and membership inference. These attacks can lead to serious consequences, including theft of high-value models, manipulation of inference results, and leakage of private training data, ultimately undermining the competitiveness of service providers. To address these issues, numerous researchers have worked on trusted execution environment (TEE) based secure inference, which ensures security while maintaining model availability. This paper began by introducing relevant background knowledge, providing a definition of secure inference, and summarizing security models in edge deployment scenarios. Subsequently, existing solutions for model confidentiality and inference integrity were categorized and introduced, with a comparative analysis and summary. Finally, the paper outlined research challenges and directions for the future of secure inference.
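To illustrate the core idea the abstract describes, the following is a minimal, purely conceptual Python sketch of TEE-based partitioned inference. All names (`Enclave`, `secure_inference`, the toy weights) are hypothetical and not from the paper: the confidential final layer is held inside a simulated enclave, so only opaque activations and the final result cross the trusted boundary, while the model weights never leave it.

```python
def relu(x):
    """Elementwise ReLU over a plain Python vector."""
    return [max(0.0, v) for v in x]

def matvec(w, x):
    """Dense matrix-vector product: one output per weight row."""
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

class Enclave:
    """Simulated TEE enclave holding the confidential final-layer weights."""
    def __init__(self, secret_weights):
        self._w = secret_weights  # stays inside the trusted boundary

    def invoke(self, activations):
        # Only the inference result leaves the enclave, not the weights.
        return matvec(self._w, activations)

# Untrusted (rich OS) portion of the model: a public first layer.
public_w = [[1.0, 0.0], [0.0, 1.0]]

# Trusted (TEE) portion: the high-value final layer.
enclave = Enclave(secret_weights=[[0.5, -0.5]])

def secure_inference(x):
    hidden = relu(matvec(public_w, x))  # runs outside the TEE
    return enclave.invoke(hidden)       # crosses into the TEE

print(secure_inference([2.0, 1.0]))  # -> [0.5]
```

A real deployment would use a hardware TEE (e.g. Arm TrustZone or Intel SGX) and its attestation/sealing primitives; this sketch only shows the partitioning pattern that many of the surveyed confidentiality schemes build on.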

Key words: secure inference, trusted execution environment, model confidentiality, inference integrity, edge deployment
