Netinfo Security ›› 2024, Vol. 24 ›› Issue (12): 1799-1818. doi: 10.3969/j.issn.1671-1122.2024.12.001

• Survey •

A Survey on Trusted Execution Environment Based Secure Inference

SUN Yu1, XIONG Gaojian1, LIU Xiao2, LI Yan3

  1. School of Cyber Science and Technology, Beihang University, Beijing 100191, China
    2. China Telecom Digital Intelligence Technology Co., Ltd., Beijing 100036, China
    3. Phytium Information Technology Co., Ltd., Beijing 100083, China
  • Received: 2024-10-15  Online: 2024-12-10  Published: 2025-01-10
  • Corresponding author: LIU Xiao, liux45@chinatelecom.cn
  • About the authors: SUN Yu (b. 1985), male, from Shandong, Ph.D., associate professor; research interests: intelligent system security | XIONG Gaojian (b. 2000), male, from Guizhou, Ph.D. candidate; research interests: artificial intelligence security | LIU Xiao (b. 1989), male, from Beijing, Ph.D.; research interests: cloud computing security and artificial intelligence security | LI Yan (b. 1984), female, from Hebei, M.S.; research interests: information technology application innovation
  • Funding:
    National Natural Science Foundation of China (62472015); CCF-Phytium Fund (202306)

Abstract:

Machine learning technologies, represented by deep neural networks, have been widely applied in fields such as autonomous driving, smart homes, and voice assistants. In these scenarios with stringent real-time requirements, most service providers deploy models on edge devices to avoid network latency and communication overhead. However, edge devices are not under the service providers' control, leaving the deployed models vulnerable to attacks such as model stealing, fault injection, and membership inference. These attacks can cause serious consequences, including the theft of high-value models, the manipulation of inference results, and the leakage of private training data, severely undermining the market competitiveness of service providers. To address these issues, numerous researchers have studied secure inference based on trusted execution environments (TEE), which protects model parameter confidentiality and inference integrity while preserving model availability. This paper first introduces the relevant background, gives a definition of secure inference, and summarizes its security model in edge deployment scenarios. It then categorizes, reviews, and compares existing TEE-based schemes for model confidentiality protection and inference integrity protection. Finally, it outlines future research directions for TEE-based secure inference.

Key words: secure inference, trusted execution environment, model confidentiality, inference integrity, edge deployment

CLC number: