Netinfo Security ›› 2024, Vol. 24 ›› Issue (9): 1396-1408. DOI: 10.3969/j.issn.1671-1122.2024.09.008


A Prompt-Focused Privacy Evaluation and Obfuscation Method for Large Language Model

JIAO Shiqin, ZHANG Guiyang, LI Guoqi

School of Reliability and Systems Engineering, Beihang University, Beijing 100191, China
Received: 2024-04-11  Online: 2024-09-10  Published: 2024-09-27

Abstract:

Despite the impressive performance of large language models (LLMs) in semantic understanding, frequent user interaction with them introduces many privacy risks. This paper evaluated the privacy protection of existing LLMs through partial recall attacks and simulated inference games. The findings indicate that common LLMs still face two challenging privacy risks: data anonymization can degrade the quality of model responses, and latent private information can still be inferred through reasoning. To address these challenges, this paper proposed a prompt-focused privacy evaluation and obfuscation method for large language models. The method unfolds as a structured process comprising initial description decomposition, fabricated description generation, and description obfuscation. The experimental results show that the proposed method effectively enhances privacy protection, as evidenced by the reduction in normalized Levenshtein distance, Jaccard similarity, and cosine similarity between pre-processed and post-processed model responses compared with existing methods. The approach also significantly limits the inference capability of LLMs, with inference accuracy dropping from 97.14% on unprocessed inputs to 34.29%. This study not only deepens the understanding of privacy risks in LLM interactions but also introduces a comprehensive approach to strengthening user privacy, effectively addressing the two challenging risk scenarios identified above.
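For concreteness, a minimal Python sketch of the three response-similarity metrics named in the abstract is given below. This illustrates the standard metric definitions only, not the authors' evaluation code; whitespace tokenization for the Jaccard and cosine measures, and the example strings, are assumptions.

import math
from collections import Counter

def normalized_levenshtein(a: str, b: str) -> float:
    """Character-level edit distance divided by max length (0 = identical)."""
    if not a and not b:
        return 0.0
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1] / max(len(a), len(b))

def jaccard_similarity(a: str, b: str) -> float:
    """Token-set overlap: |A ∩ B| / |A ∪ B|."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def cosine_similarity(a: str, b: str) -> float:
    """Cosine of term-frequency vectors over the shared vocabulary."""
    ca, cb = Counter(a.split()), Counter(b.split())
    dot = sum(ca[t] * cb[t] for t in ca.keys() & cb.keys())
    norm = (math.sqrt(sum(v * v for v in ca.values()))
            * math.sqrt(sum(v * v for v in cb.values())))
    return dot / norm if norm else 0.0

if __name__ == "__main__":
    # Hypothetical pre- and post-obfuscation responses, for illustration only.
    pre = "the patient lives in Berlin and works as a nurse"
    post = "the person lives in a large city and works in healthcare"
    print(normalized_levenshtein(pre, post))
    print(jaccard_similarity(pre, post))
    print(cosine_similarity(pre, post))

Under this reading, comparing responses to the original and the obfuscated prompts with these three measures quantifies how much the obfuscation perturbs model output relative to baseline methods.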

Key words: privacy risk, LLM, prompt engineering, description obfuscation
