[1] KOKULU F B, SONEJI A, BAO T, et al. Matched and Mismatched SOCs: A Qualitative Study on Security Operations Center Issues[C]// ACM. The 2019 ACM SIGSAC Conference on Computer and Communications Security. New York: ACM, 2019: 1955-1970.
[2] HASSAN W U, NOUREDDINE M A, DATTA P, et al. OmegaLog: High-Fidelity Attack Investigation via Transparent Multi-Layer Log Analysis[EB/OL]. (2020-02-23)[2025-03-01]. https://www.ndss-symposium.org/ndss-paper/omegalog-high-fidelity-attack-investigation-via-transparent-multi-layer-log-analysis/.
[3] YU Le, MA Shiqing, ZHANG Zhuo, et al. ALchemist: Fusing Application and Audit Logs for Precise Attack Provenance Without Instrumentation[EB/OL]. (2021-02-21)[2025-03-01]. https://dx.doi.org/10.14722/ndss.2021.24445.
[4] ZHONG Chen, LIN Tao, LIU Peng, et al. A Cyber Security Data Triage Operation Retrieval System[J]. Computers & Security, 2018, 76: 12-31. doi: 10.1016/j.cose.2018.02.011.
[5] HASSAN W U, GUO Shengjian, LI Ding, et al. NoDoze: Combatting Threat Alert Fatigue with Automated Provenance Triage[EB/OL]. (2019-02-22)[2025-03-01]. https://www.researchgate.net/publication/348915407_NoDoze_Combatting_Threat_Alert_Fatigue_with_Automated_Provenance_Triage.
[6] DAS S, SAHA S, PRIYOTI A T, et al. Network Intrusion Detection and Comparative Analysis Using Ensemble Machine Learning and Feature Selection[J]. IEEE Transactions on Network and Service Management, 2022, 19(4): 4821-4833. doi: 10.1109/TNSM.2021.3138457.
[7] ATEFINIA R, AHMADI M. Network Intrusion Detection Using Multi-Architectural Modular Deep Neural Network[J]. The Journal of Supercomputing, 2021, 77(4): 3571-3593. doi: 10.1007/s11227-020-03410-y.
[8] ZHANG Changlin, TONG Xin, TONG Hui, et al. A Survey of Large Language Models in the Domain of Cybersecurity[J]. Netinfo Security, 2024, 24(5): 778-793.
[9] CUI Yiming, YANG Ziqing, YAO Xin. Efficient and Effective Text Encoding for Chinese LLaMA and Alpaca[EB/OL]. (2024-02-23)[2025-03-01]. https://doi.org/10.48550/arXiv.2304.08177.
[10] ZENG Aohan, XU Bin, WANG Bowen, et al. ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools[EB/OL]. (2024-07-30)[2025-03-01]. https://doi.org/10.48550/arXiv.2406.12793.
[11] BAI Jinze, BAI Shuai, CHU Yunfei, et al. Qwen Technical Report[EB/OL]. (2023-09-28)[2025-03-01]. https://doi.org/10.48550/arXiv.2309.16609.
[12] BROWN T, MANN B, RYDER N, et al. Language Models are Few-Shot Learners[J]. Advances in Neural Information Processing Systems, 2020, 33: 1877-1901.
[13] WEI J, WANG Xuezhi, SCHUURMANS D, et al. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models[J]. Advances in Neural Information Processing Systems, 2022, 35: 24824-24837.
[14] LIN C Y. ROUGE: A Package for Automatic Evaluation of Summaries[C]// ACL. Text Summarization Branches Out: Proceedings of the ACL-04 Workshop. Stroudsburg: ACL, 2004: 74-81.
[15] ACHIAM J, ADLER S, AGARWAL S, et al. GPT-4 Technical Report[EB/OL]. (2023-03-15)[2025-03-01]. https://doi.org/10.48550/arXiv.2303.08774.
[16] ZHENG Yaowei, ZHANG Richong, ZHANG Junhao, et al. LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models[EB/OL]. (2024-06-27)[2025-03-01]. https://doi.org/10.48550/arXiv.2403.13372.
[17] LYU Kai, YANG Yuqing, LIU Tengxiao, et al. Full Parameter Fine-Tuning for Large Language Models with Limited Resources[C]// ACL. The 62nd Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2024: 8187-8198.
[18] ZHANG Qintong, WANG Yuchao, WANG Hexi, et al. Comprehensive Review of Large Language Model Fine-Tuning[J]. Computer Engineering and Applications, 2024, 60(17): 17-33. doi: 10.3778/j.issn.1002-8331.2312-0035.
[19] HU E J, SHEN Yelong, WALLIS P, et al. LoRA: Low-Rank Adaptation of Large Language Models[EB/OL]. (2021-10-16)[2025-03-01]. https://doi.org/10.48550/arXiv.2106.09685.