Netinfo Security ›› 2026, Vol. 26 ›› Issue (4): 503-520.doi: 10.3969/j.issn.1671-1122.2026.04.001


A Survey of Privacy-Preserving Techniques for Large Language Model Inference

CUI Jinhua, DONG Liang, YANG Xin

  1. College of Semiconductors (College of Integrated Circuits), Hunan University, Changsha 410082, China
  • Received:2025-09-28 Online:2026-04-10 Published:2026-04-29

Abstract:

Large language models (LLMs) have been widely applied in fields such as healthcare, finance, and justice. However, privacy risks are particularly prominent during the inference phase. From the perspective of privacy risks, this paper systematically analyzed the potential threats in the inference phase and classified them according to the different objects of privacy leakage. Subsequently, it outlined existing privacy-preserving methods, classifying them by technical path into cryptography-based, detection-based, and trusted execution environment-based methods, and discussed the advantages and limitations of each type of method. Furthermore, this paper compared and analyzed the different methods in depth along four dimensions: security, efficiency, scalability, and deployment complexity. Finally, based on the current research status and challenges, it summarized future research directions and potential solutions for enhancing LLM privacy protection in the inference phase.

Key words: large language model, inference phase, privacy protection, trusted execution environment
