Netinfo Security ›› 2025, Vol. 25 ›› Issue (10): 1570-1578. DOI: 10.3969/j.issn.1671-1122.2025.10.008


Research on the Application of Large Language Model in False Positive Handling for Managed Security Services

HU Longhui1, SONG Hong1, WANG Weiping1, YI Jia2, ZHANG Zhixiong2

  1. School of Computer Science and Engineering, Central South University, Changsha 410083, China
  2. Sangfor Technologies Co., Ltd., Shenzhen 518052, China
  • Received: 2025-03-03  Online: 2025-10-10  Published: 2025-11-07
  • Contact: SONG Hong  E-mail: songhong@csu.edu.cn

Abstract:

When managed security services are provided by a third party, deploying unified security detection rules frequently produces false positive alerts because enterprise users' networks differ. Handling these alerts typically requires manually adapting security rules or filtering alerts based on user feedback. This article proposed an automated method for processing user feedback in this scenario. The method automatically extracted statements related to false positive alert filtering from user feedback and converted them into alert filtering rules for security devices. It was based on a large language model and combined two prompt engineering techniques, chain-of-thought and few-shot prompting, to extract alert filtering statements from user feedback. To further enhance extraction performance, a security corpus generated by GPT-4 was used for instruction fine-tuning of the best-performing language models, ChatGLM4 and Qwen1.5. Experimental results show that the method achieves a Rouge-L score of 92.208% on the task of extracting statements related to false positive alert filtering, which can effectively reduce the workload of manually reviewing user feedback.
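To make the prompting strategy concrete, the sketch below illustrates how a few-shot, chain-of-thought prompt for extracting alert-filtering statements from user feedback could be assembled. It is a minimal illustration only: the example feedback texts, the reasoning strings, the output format, and the function name are assumptions and are not taken from the paper's actual prompt design.

```python
# Minimal sketch of a few-shot + chain-of-thought prompt builder for
# extracting false-positive alert filtering statements from user feedback.
# The examples, reasoning text, and output schema are illustrative assumptions.

FEW_SHOT_EXAMPLES = [
    {
        "feedback": "The 10.0.3.0/24 segment hosts our internal scanner; "
                    "please stop alerting on port-scan events from it.",
        "reasoning": "The user says the scans are expected internal activity, "
                     "so this sentence requests suppression of port-scan alerts "
                     "for that segment.",
        "extracted": "Filter port-scan alerts originating from 10.0.3.0/24.",
    },
    {
        "feedback": "Great response time last week, thanks!",
        "reasoning": "The sentence is general praise and contains no filtering request.",
        "extracted": "NONE",
    },
]

def build_prompt(user_feedback: str) -> str:
    """Assemble a few-shot, chain-of-thought prompt that asks the model to
    extract statements related to false positive alert filtering."""
    lines = [
        "You are a security operations assistant. From the user feedback, "
        "extract only the statements that request filtering of false positive "
        "alerts. Think step by step, then output the extracted statement, "
        "or NONE if there is no such request.",
        "",
    ]
    for ex in FEW_SHOT_EXAMPLES:
        lines += [
            f"Feedback: {ex['feedback']}",
            f"Reasoning: {ex['reasoning']}",
            f"Extracted: {ex['extracted']}",
            "",
        ]
    lines += [f"Feedback: {user_feedback}", "Reasoning:"]
    return "\n".join(lines)

if __name__ == "__main__":
    # The resulting prompt would then be sent to an LLM such as ChatGLM4 or Qwen1.5.
    print(build_prompt("Alerts on SSH logins from our jump host 192.168.1.5 are noise."))
```

In this sketch, the model is asked to produce its reasoning before the extracted statement, mirroring the chain-of-thought technique named in the abstract, while the worked examples provide the few-shot demonstrations.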

Key words: managed security service, alarm filtering, large language model, prompt engineering, instruction fine-tuning
