
Table of Contents

    10 October 2025, Volume 25 Issue 10

    A Review of Safety Detection and Evaluation Technologies for Large Models
    HU Bin, HEI Yiming, WU Tiejun, ZHENG Kaifa, LIU Wenzhong
    2025, 25 (10):  1477-1492.  doi: 10.3969/j.issn.1671-1122.2025.10.001

    With the rapid development of artificial intelligence technology, large language models (LLMs) have emerged in many fields such as scientific research, education, finance, and healthcare due to their powerful natural language processing capabilities. However, as LLMs are widely adopted, they bring a series of security issues: risks of bias and discrimination, potential generation of harmful content, risks of user privacy leakage, dissemination of misleading information, and vulnerability to malicious adversarial attacks. These risks may harm users and even impact social stability and ethical order, necessitating comprehensive security testing and evaluation of LLMs. This article focused on current research on LLM security assessment, categorizing common security risks and reviewing mainstream security evaluation techniques. It also introduced relevant assessment methods, metrics, commonly used datasets, and tools, and summarized key security evaluation standards and guidelines developed globally. Additionally, the paper discussed the technical concepts, principles, and implementation mechanisms of safety alignment, along with its evaluation framework. Finally, by analyzing the challenges faced in current LLM security assessment, it outlined future technological trends and research directions, aiming to provide guidance for academic and industrial research and practice.

    Review of Cyber Resilience Assessment Framework and Methods
    ZHANG Dalong, DING Shuguang, HAN Zhilong, FU Shouli, TANG Zhiqing, SHI Lei
    2025, 25 (10):  1493-1505.  doi: 10.3969/j.issn.1671-1122.2025.10.002

    Cyber resilience emphasizes a system's ability to perceive, resist, recover from, and adapt to disasters or attacks. Constructing a resilient cyberspace can reduce the occurrence of security collapses while mitigating the damage they cause and enabling quick recovery, thereby enhancing the security resilience of cyberspace. The primary task in developing cyber resilience is to assess it. This paper first briefly introduced the concept of cyber resilience and the need for resilience assessment. Subsequently, we reviewed existing research from two aspects: cyber resilience assessment frameworks and assessment methods. For assessment frameworks, a classification of existing frameworks into process-oriented and result-oriented approaches was proposed. For assessment methods, existing methods were introduced from qualitative and quantitative perspectives. Moreover, this paper discussed the advantages and challenges associated with each type of framework and method. This analysis is important for guiding the application of existing frameworks and methods, as well as for developing new assessment frameworks and methods. Finally, we summarized and discussed future directions of cyber resilience assessment.

    A Survey of Routing Technologies and Protocols in Polymorphic Networks
    LAN Jiachen, CHEN Xiarun, ZHOU Yangkai, WEN Weiping
    2025, 25 (10):  1506-1522.  doi: 10.3969/j.issn.1671-1122.2025.10.003

    As polymorphic networks operate concurrently over a unified infrastructure, the heterogeneity and complexity among network modals become increasingly salient, raising stronger requirements for path trustworthiness and routing security. This paper presented a systematic survey of routing technologies and protocols in polymorphic networks. We reviewed representative network modals—content-centric, identity-centric, geographic-oriented, IP-oriented, and compute-oriented—summarizing their routing patterns, path-construction methods, and key characteristics, and contrasting their application scenarios together with security considerations. In particular, NDN embodies data-centric security; the IP modal has evolved BGP-SEC and RPKI to protect path integrity; and cross-modal settings have introduced trusted routing, path encryption, and path randomization to enhance reliability and resilience against attacks. Building on these observations, we further discussed cross-modal coordination mechanisms such as dynamic loading of network modals, identifier coexistence, and network compilation, and outlined research directions toward trustworthy path construction and verifiable network services. The goal of this survey is to provide a structured reference for the design of routing systems in polymorphic networks and to support their evolution toward a more efficient and secure next-generation network.

    Implementation Mechanism for TrustZone Paravirtualization and Containerization
    YU Fajiang, WANG Chaozhou
    2025, 25 (10):  1523-1536.  doi: 10.3969/j.issn.1671-1122.2025.10.004

    TrustZone has been widely applied in mobile platforms. With the increasing application of ARM CPUs in cloud services, the demand to use TrustZone to enhance the security of virtual machine computing environments and data has become increasingly prominent. However, the hardware-based trusted execution environment (TEE) provided by basic TrustZone typically only supports applications running on the host. To address this issue, this paper proposed an implementation mechanism for TrustZone paravirtualization and containerization called pvTEE, allowing client applications within virtual machines or containers to efficiently utilize the TEE of the host platform in parallel. pvTEE forwarded invocation requests from client applications within virtual machines or containers to trusted applications within the TEE through the front-end driver vTEEdriver, the virtual device vTEE, the host proxy vTEEproxy, and the back-end driver TEEdriver. Client applications within the host, virtual machines, or containers could only invoke trusted applications in their respective scenarios and could not access other instances. Meanwhile, the host, virtual machines, and containers each had independent log collection capabilities and secure storage services. pvTEE was implemented on a server based on an ARMv8.2 CPU, as well as in QEMU-KVM virtual machines and Docker containers. Performance testing indicates that invoking a trusted application for one complete RSA encryption and decryption operation from a client application in the virtual machine scenario incurs only approximately 6% additional overhead compared to the host scenario.
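
    The sketch below is only a conceptual Python model of the forwarding chain named in the abstract (vTEEdriver, vTEE, vTEEproxy, TEEdriver); the real pvTEE components are guest/host drivers and a QEMU virtual device, and all class and method names here are illustrative assumptions rather than the paper's interfaces.

    # Conceptual sketch of the pvTEE forwarding chain; names are illustrative.
    class TEEDriver:                          # host back-end driver
        def invoke(self, ta_uuid, command, request):
            # in the real system this enters the secure world via SMC
            return {"ta": ta_uuid, "command": command, "result": "ok"}

    class VTEEProxy:                          # host user-space proxy, one per guest
        def __init__(self, tee_driver, tenant_id):
            self.tee_driver = tee_driver
            self.tenant_id = tenant_id        # isolates VM/container instances
        def forward(self, request):
            request = dict(request, tenant=self.tenant_id)
            return self.tee_driver.invoke(request["ta"], request["command"], request)

    class VTEEDevice:                         # virtual device exposed to the guest
        def __init__(self, proxy):
            self.proxy = proxy
        def submit(self, request):
            return self.proxy.forward(request)

    class VTEEDriver:                         # guest front-end driver
        def __init__(self, device):
            self.device = device
        def invoke_ta(self, ta_uuid, command, params=None):
            return self.device.submit({"ta": ta_uuid, "command": command, "params": params})

    # A guest client application invoking a hypothetical RSA trusted application:
    guest = VTEEDriver(VTEEDevice(VTEEProxy(TEEDriver(), tenant_id="vm-1")))
    print(guest.invoke_ta("rsa-ta", "encrypt", b"hello"))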

    Fuzz Testing Method for Firmware in Cloud-Edge Collaborative Scenarios
    TAO Ci, WANG Yi, ZHANG Lei, CHEN Ping
    2025, 25 (10):  1537-1545.  doi: 10.3969/j.issn.1671-1122.2025.10.005

    In the context of cloud-edge collaboration, ensuring the security of firmware for massive numbers of edge devices faces dual challenges: difficulty in state perception and low execution efficiency. As firmware is typically released in binary form, state perception methods relying on source code instrumentation are no longer applicable. Meanwhile, efficient full-system emulation of heterogeneous architectures such as ARM on x86 platforms remains a bottleneck of existing technologies, significantly limiting the throughput of fuzz testing. To address these issues, this paper proposed an efficient fuzz testing framework tailored for ARM architecture firmware. To overcome the performance bottleneck of cross-architecture emulation, this work leveraged the fork mechanism internally within QEMU, designing and implementing a lightweight, cross-architecture full-system virtual machine snapshot technology that does not rely on specific hardware (e.g., Intel VT-x), significantly enhancing testing efficiency. To achieve state perception without source code, this paper implemented multiple state identification methods based on network packet analysis, memory data clustering, and call stack analysis. Additionally, a unified proxy module supported transparent testing of complex targets such as network services. Experimental results demonstrate that the proposed framework achieves approximately a 19% improvement in testing efficiency, successfully reproduces known vulnerabilities such as CVE-2019-15232, and validates its capability to model program states under source-code-absent conditions, providing an effective solution for security testing in cloud-edge collaborative scenarios.
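
    The following is a minimal Python sketch of the fork-based "snapshot" idea described above, purely for illustration: the paper applies the fork mechanism inside QEMU's full-system emulator, whereas here a POSIX process fork simply shows how an expensive initialization can be reused so each test case starts from the same state.

    # Fork-based snapshot fuzzing loop (illustrative, POSIX only).
    import os, random

    def expensive_boot():
        # stands in for booting the emulated ARM firmware up to the fuzz entry point
        return {"booted": True}

    def run_one_testcase(state, data):
        # stands in for feeding one fuzz input to the target and observing a crash
        return data[:1] == b"\xff"          # pretend 0xff-prefixed inputs crash

    state = expensive_boot()
    for _ in range(8):
        data = bytes([random.randrange(256)]) + os.urandom(3)
        pid = os.fork()
        if pid == 0:                        # child: run on a copy of the snapshot state
            crashed = run_one_testcase(state, data)
            os._exit(1 if crashed else 0)
        _, status = os.waitpid(pid, 0)      # parent: collect result, state untouched
        if os.WEXITSTATUS(status) == 1:
            print("crash reproduced with input", data.hex())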

    Research on Universal Service Mode of Quantum Key Based on Dual Key Synchronization
    XIE Sijiang, FENG Yan, YAN Yalong, NING Fei
    2025, 25 (10):  1546-1553.  doi: 10.3969/j.issn.1671-1122.2025.10.006

    A universal service mode of quantum key based on dual key synchronization was proposed to address problems of the quantum key service in quantum key distribution networks, such as limited universality and lack of service quality assurance. In this general service mode, the application model of the quantum key distribution network was first designed abstractly, and concrete ways in which the quantum key distribution network serves cryptographic systems, key management systems, and applications were given. Secondly, four types of universal quantum key services, namely dynamic end-to-end key service, static end-to-end key service, dynamic group key service, and static group key service, were proposed to realize optimal adaptation of the quantum key service to multiple cryptographic application scenarios. Then, a dual key synchronization mechanism based on dual types of key pools was proposed, which effectively addressed the problems of competitive use of quantum keys and reliability assurance of the quantum key service for end-to-end quantum key distribution in the network. Finally, the four types of universal quantum key services and the dual key synchronization mechanism were implemented in a quantum key service system. Testing of the quantum key service in a quantum key distribution network verified that the proposed universal service mode can support effective integration of quantum key distribution technology with classic cryptographic applications and can support the large-scale application of quantum key distribution networks.
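
    As a rough illustration of the key-pool idea (not the paper's concrete dual-pool design), the Python sketch below shows two endpoints holding mirrored pools of QKD-delivered keys and taking keys by an agreed identifier, so an end-to-end service obtains the same key on both sides without competing for fresh key material; pool names and the identifier scheme are assumptions.

    # Mirrored key pools with identifier-based synchronization (illustrative).
    import os

    class KeyPool:
        def __init__(self):
            self.keys = {}                      # key_id -> key bytes
        def deposit(self, key_id, key):
            self.keys[key_id] = key
        def take(self, key_id):
            return self.keys.pop(key_id)        # each key is consumed exactly once

    # The QKD layer delivers identical (key_id, key) pairs to both endpoints;
    # keeping dynamic and static pools separate avoids competitive use between
    # interactive services and pre-provisioned services.
    alice_dynamic, bob_dynamic = KeyPool(), KeyPool()
    for i in range(3):
        key_id, key = f"k{i}", os.urandom(32)
        alice_dynamic.deposit(key_id, key)
        bob_dynamic.deposit(key_id, key)

    # An end-to-end key request names the identifier, so both sides stay in sync.
    assert alice_dynamic.take("k1") == bob_dynamic.take("k1")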

    A Cumulant-Deep Learning Fusion Model for Underwater Modulation Recognition
    LI Guyue, ZHANG Zihao, MAO Chenghai, LYU Rui
    2025, 25 (10):  1554-1569.  doi: 10.3969/j.issn.1671-1122.2025.10.007

    In complex and demanding underwater acoustic communication environments, modulation recognition technology is crucial for improving the anti-interception capabilities and information security of underwater communication systems. However, nonlinearity, multipath effects, and strong noise interference in underwater acoustic channels pose significant challenges to automatic modulation recognition (AMR). To address these challenges, this paper proposed a deep modulation recognition model (CRT) that integrates wavelet denoising and high-order cumulants. This model optimized the residual network (ResNet) and Transformer encoder (Trans-Encoder) architectures to model local and global temporal features, respectively. Furthermore, it integrated high-order cumulants based on the time-frequency distribution of underwater acoustic signals. This model achieves an average recognition accuracy of 93.56% for nine typical underwater modulation modes, a 2.4% improvement over the current best model. In particular, in low signal-to-noise ratio (SNR) environments of -10 dB to -2 dB, the recognition accuracy improves by over 10%, demonstrating the effectiveness and practical value of the CRT model in complex underwater acoustic scenarios.
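
    A minimal PyTorch sketch of the kind of fusion described above (local temporal features from a ResNet-style branch, global features from a Transformer encoder, concatenated with hand-crafted high-order cumulants); layer sizes and the exact CRT architecture are assumptions, not the paper's configuration.

    import torch
    import torch.nn as nn

    class ResidualBlock1d(nn.Module):
        def __init__(self, ch):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv1d(ch, ch, 3, padding=1), nn.BatchNorm1d(ch), nn.ReLU(),
                nn.Conv1d(ch, ch, 3, padding=1), nn.BatchNorm1d(ch))
        def forward(self, x):
            return torch.relu(x + self.conv(x))

    class CRTLikeModel(nn.Module):
        def __init__(self, n_classes=9, n_cumulants=6, d_model=64):
            super().__init__()
            self.stem = nn.Conv1d(2, d_model, 3, padding=1)       # I/Q input
            self.res = nn.Sequential(ResidualBlock1d(d_model), ResidualBlock1d(d_model))
            enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
            self.head = nn.Linear(d_model + n_cumulants, n_classes)
        def forward(self, iq, cumulants):
            h = self.res(self.stem(iq))                 # local features, (B, d_model, T)
            h = self.encoder(h.transpose(1, 2)).mean(1) # global features pooled over time
            return self.head(torch.cat([h, cumulants], dim=1))

    model = CRTLikeModel()
    logits = model(torch.randn(4, 2, 256), torch.randn(4, 6))     # (4, 9) class logits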

    Research on the Application of Large Language Model in False Positive Handling for Managed Security Services
    HU Longhui, SONG Hong, WANG Weiping, YI Jia, ZHANG Zhixiong
    2025, 25 (10):  1570-1578.  doi: 10.3969/j.issn.1671-1122.2025.10.008

    When managed security services are provided by a third party, the deployment of unified security detection rules frequently results in false positive alerts due to differences among enterprise user networks. This typically requires manual adaptation of security rules or alert filtering based on user feedback. This article proposed an automated method for processing user feedback in this application scenario. The method automatically extracted statements related to false positive alert filtering from user feedback and converted them into alert filtering rules for security devices. The method was based on a large language model, combined with two prompt engineering techniques, chain-of-thought and few-shot prompting, to extract alert filtering statements from user feedback. To further enhance extraction performance, a security corpus generated by GPT-4 was used for instruction fine-tuning of the best-performing ChatGLM4 and Qwen1.5 language models. The experimental results show that this method achieves a Rouge-L score of 92.208% on the task of extracting false-positive filtering statements, which can effectively reduce the workload of manually reviewing user feedback.
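
    A minimal sketch of the prompting approach described above: a chain-of-thought instruction combined with few-shot examples so a chat model extracts only the sentences that request false-positive alert filtering. The example feedback texts are placeholders, not the paper's corpus, and no specific model API is assumed.

    # Build a chat prompt with a chain-of-thought instruction and few-shot examples.
    FEW_SHOT = [
        {"feedback": "The alerts on 10.0.0.5 port 8080 are our own monitoring probe, "
                     "please stop reporting them. Also, the VPN outage is resolved.",
         "extracted": "The alerts on 10.0.0.5 port 8080 are our own monitoring probe, "
                      "please stop reporting them."},
    ]

    def build_messages(user_feedback):
        system = ("You are a security operations assistant. First reason step by step "
                  "about which sentences ask for alert filtering, then output only "
                  "those sentences.")                    # chain-of-thought instruction
        messages = [{"role": "system", "content": system}]
        for shot in FEW_SHOT:                            # few-shot demonstrations
            messages.append({"role": "user", "content": shot["feedback"]})
            messages.append({"role": "assistant", "content": shot["extracted"]})
        messages.append({"role": "user", "content": user_feedback})
        return messages

    # The resulting messages list can be sent to any chat-style model (e.g., ChatGLM4
    # or Qwen1.5 as in the paper) before converting the extracted sentences into
    # device-specific filtering rules.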

    Multi-Feature Fusion for Malicious PDF Document Detection Based on CNN-BiLSTM-CBAM
    WANG Youhe, SUN Yi
    2025, 25 (10):  1579-1588.  doi: 10.3969/j.issn.1671-1122.2025.10.009

    To solve the problems that existing detection methods for malicious PDF documents ignore the semantic relationships between features and are often limited to a single type of feature analysis, this paper proposed a detection scheme that applies the CNN-BiLSTM-CBAM model and multi-feature fusion to the detection of malicious PDF documents. This method not only integrated the conventional and structural information extracted by static analysis, but also combined the API sequence information captured by dynamic analysis to build a comprehensive multi-dimensional feature set. First, the model used a convolutional neural network to extract local features from the feature set. Secondly, BiLSTM was used to capture the dependencies and contextual semantic relationships between features, and the convolutional block attention module (CBAM) was used to assign different weights to different features to screen out the most distinguishable key features. Finally, a Softmax classifier was used to produce the detection results. The experimental results show that, compared with existing methods, the proposed model shows significant advantages in key performance indicators such as accuracy, recall, and F1 score, and effectively improves the detection performance for malicious PDF documents.
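
    A compact PyTorch sketch of the pipeline described above: a CNN for local features, a BiLSTM for contextual dependencies, a simplified channel-attention module in the spirit of CBAM for feature re-weighting, and a softmax classifier. The dimensions and the simplified attention are assumptions for illustration, not the paper's exact configuration.

    import torch
    import torch.nn as nn

    class ChannelAttention(nn.Module):            # simplified stand-in for CBAM
        def __init__(self, ch, reduction=4):
            super().__init__()
            self.mlp = nn.Sequential(nn.Linear(ch, ch // reduction), nn.ReLU(),
                                     nn.Linear(ch // reduction, ch))
        def forward(self, x):                     # x: (B, C, T)
            gate = torch.sigmoid(self.mlp(x.mean(dim=2)) + self.mlp(x.amax(dim=2)))
            return x * gate.unsqueeze(2)

    class PdfDetector(nn.Module):
        def __init__(self, n_classes=2):
            super().__init__()
            self.cnn = nn.Sequential(nn.Conv1d(1, 32, 3, padding=1), nn.ReLU())
            self.lstm = nn.LSTM(32, 64, batch_first=True, bidirectional=True)
            self.attn = ChannelAttention(128)     # 2 x 64 BiLSTM output channels
            self.fc = nn.Linear(128, n_classes)
        def forward(self, x):                     # x: (B, F) fused static+dynamic features
            h = self.cnn(x.unsqueeze(1))              # local features, (B, 32, F)
            out, _ = self.lstm(h.transpose(1, 2))     # contextual features, (B, F, 128)
            weighted = self.attn(out.transpose(1, 2)) # re-weighted features, (B, 128, F)
            return self.fc(weighted.mean(dim=2))      # logits; softmax applied at inference

    model = PdfDetector()
    probs = torch.softmax(model(torch.randn(4, 100)), dim=1)      # (4, 2)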

    Binary Code Similarity Detection Method Based on Multivariate Semantic Graph
    ZHANG Lu, JIA Peng, LIU Jiayong
    2025, 25 (10):  1589-1603.  doi: 10.3969/j.issn.1671-1122.2025.10.010

    Binary code similarity detection is the basis for applications such as code clone detection, vulnerability search, and software theft detection. However, binary code loses the rich semantic information of the source code after compilation, and it often lacks effective feature representation due to the diversity of the compilation process. To address this challenge, this paper proposed an innovative similarity detection architecture, SiamGGCN, which fuses gated graph neural networks and attention mechanisms and introduces a multivariate semantic graph that effectively combines the control flow, sequence flow, and data flow information of assembly code, providing more accurate and comprehensive semantic parsing for binary code similarity detection. The proposed method was experimentally validated on multiple datasets and a wide range of scenarios. The experimental results show that SiamGGCN significantly outperforms existing methods in terms of precision and recall, which fully demonstrates its superior performance and application potential in the field of binary code similarity detection.
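
    An illustrative PyTorch sketch of the siamese gated-graph idea (not the paper's SiamGGCN implementation): each function's multivariate semantic graph is embedded by a few rounds of GRU-gated message passing with shared weights, and the two embeddings are compared with cosine similarity. Dimensions and the dense adjacency are assumptions.

    import torch
    import torch.nn as nn

    class GatedGraphEncoder(nn.Module):
        def __init__(self, dim=64, steps=3):
            super().__init__()
            self.msg = nn.Linear(dim, dim)
            self.gru = nn.GRUCell(dim, dim)
            self.steps = steps
        def forward(self, node_feats, adj):          # (N, dim), (N, N)
            h = node_feats
            for _ in range(self.steps):
                m = adj @ self.msg(h)                # aggregate neighbor messages
                h = self.gru(m, h)                   # gated node update
            return h.mean(dim=0)                     # graph-level embedding

    encoder = GatedGraphEncoder()                    # shared (siamese) weights

    def similarity(g1, g2):
        e1, e2 = encoder(*g1), encoder(*g2)
        return torch.cosine_similarity(e1, e2, dim=0)

    g = (torch.randn(5, 64), torch.eye(5))
    print(float(similarity(g, g)))                   # identical graphs -> 1.0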

    A Joint Detection Method for Physical and Digital Face Attacks Based on Common Forgery Clue Awareness
    LIANG Fengmei, PAN Zhenghao, LIU Ajian
    2025, 25 (10):  1604-1614.  doi: 10.3969/j.issn.1671-1122.2025.10.011

    In practical applications, facial recognition systems face the dual threats of physical attacks and digital attacks. Due to the significant heterogeneity between these two types of attacks, different models are often relied upon to address them separately. To conserve computational resources and reduce hardware deployment costs, this paper proposed a joint physical and digital face attack detection method based on the contrastive language and image pre-training model, targeting the characteristic that attack features show notable distribution differences and cluster by attack type in the feature space. Firstly, an adaptive feature extraction module was proposed based on the mixture of experts (MoE) structure, achieving attack-type-adaptive feature selection through sparse activation combined with a shared branch. Secondly, an attack-agnostic learnable text prompt was proposed to explore the common forgery clues of physical and digital attacks, enabling effective aggregation of different attack feature clusters. Finally, a residual self-attention mechanism was introduced, and a fine-grained alignment loss was designed to optimize the extraction of common forgery clues. Experimental results under the joint training protocol on the UniAttackData and JFSFDB datasets show that the proposed method achieves a lower average classification error rate (ACER) than other algorithms.
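
    A minimal PyTorch sketch of the mixture-of-experts idea used for adaptive feature extraction (sparse top-1 routing plus an always-active shared branch); the sizes, routing rule, and the omission of the CLIP backbone and learnable text prompts are simplifications, not the paper's design.

    import torch
    import torch.nn as nn

    class SparseMoE(nn.Module):
        def __init__(self, dim=512, n_experts=4):
            super().__init__()
            self.experts = nn.ModuleList([nn.Linear(dim, dim) for _ in range(n_experts)])
            self.shared = nn.Linear(dim, dim)          # shared branch, always active
            self.router = nn.Linear(dim, n_experts)
        def forward(self, x):                          # x: (B, dim) image features
            gate = torch.softmax(self.router(x), dim=1)
            weight, top = gate.max(dim=1)              # sparse activation: one expert per sample
            routed = torch.stack([self.experts[int(i)](xi) for xi, i in zip(x, top)])
            return weight.unsqueeze(1) * routed + self.shared(x)

    moe = SparseMoE()
    features = moe(torch.randn(8, 512))                # attack-type-adaptive features, (8, 512)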

    Research on Network Asset Identification Technology Based on Graph Neural Network
    LI Tao, CHENG Baifeng
    2025, 25 (10):  1615-1626.  doi: 10.3969/j.issn.1671-1122.2025.10.012

    Network assets are the sum of all digital assets, such as equipment, information, and applications, owned by an organization in cyberspace that can be exploited by potential attackers, and identifying them is very important. To improve the efficiency and accuracy of network asset identification, this paper designed a network asset identification model based on a graph neural network, which represented asset response messages in the form of a graph. The model could intuitively express the relationships between the various elements in the text and could use the connections between nodes to retain global graph information. The model consisted of three parts. Firstly, a heterogeneous graph containing three types of nodes and five types of edges was constructed from the asset response message; then a two-level attention mechanism was introduced to train the two-layer convolutional neural network; finally, two types of loss functions were calculated to obtain the final identification results. Experiments on a sample set of 3000 network asset response messages achieve an identification accuracy of 92.38% after training, representing an approximately 5% improvement over existing methods and demonstrating the model's effectiveness in asset identification.
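
    The sketch below is a simplified PyTorch illustration of the graph-based idea only, not the paper's heterogeneous model with three node types, five edge types, and two-level attention: elements of a response message become nodes, and two rounds of graph convolution over a normalized adjacency matrix produce a graph-level representation for asset classification. Feature sizes, class count, and the toy adjacency are assumptions.

    import torch
    import torch.nn as nn

    class TwoLayerGCN(nn.Module):
        def __init__(self, in_dim=32, hid=64, n_classes=10):
            super().__init__()
            self.w1 = nn.Linear(in_dim, hid)
            self.w2 = nn.Linear(hid, n_classes)
        def forward(self, x, adj):                       # x: (N, in_dim), adj: (N, N)
            deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
            a_norm = adj / deg                           # simple row normalization
            h = torch.relu(self.w1(a_norm @ x))
            h = self.w2(a_norm @ h)
            return h.mean(dim=0)                         # graph-level logits for the message

    # Nodes could stand for the message itself, header fields, and value tokens,
    # with edges linking fields to their tokens and to the message node.
    x = torch.randn(6, 32)
    adj = torch.eye(6) + torch.rand(6, 6).round()        # toy adjacency matrix
    logits = TwoLayerGCN()(x, adj)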

    Research on Cross Form Similarity Detection for C/C++ Code
    WANG Yanxin, JIA Peng, FAN Ximing, PENG Xi
    2025, 25 (10):  1627-1638.  doi: 10.3969/j.issn.1671-1122.2025.10.013

    Binary-source code similarity detection plays an important role in software development and security tasks such as reverse engineering and copyright infringement detection. Although current binary-source code similarity detection methods have achieved good results, they still target similarity detection between binary code and source code compiled under the same architecture, compiler, and optimization level. In actual detection, the binary files under examination often come from different architectures, compilers, and optimization levels; distinguishing and detecting them separately brings additional time overhead and challenges for feature extraction design. To this end, this paper proposed a cross-architecture, cross-compiler, and cross-optimization-level binary-source code similarity detection method based on intermediate representations. On the binary side, it converted binaries into intermediate representations that can be shared across platforms and programming languages, reducing semantic differences among homologous binary files produced under different compilation settings. The CodeBERT model was used to extract source code features, while the BERT model and GCN model were used to extract binary file features, and cosine similarity was used to measure the similarity between the two sides. To verify the effectiveness of the method, the paper compiled 7 components into binary files and constructed a dataset using different compilers, optimization levels, and architectures. Two tasks, one-to-one detection and one-to-many detection, were performed on the dataset, and the impact of factors such as pre-training, instruction merging, and thresholds on recognition accuracy was explored. The experimental results and analysis indicate that the proposed intermediate-representation-based binary-source code similarity detection method can effectively solve the similarity detection problem between homologous binary functions and source code in various compilation scenarios.
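
    A schematic Python sketch of the final matching step described above: a source-side embedding (from a CodeBERT-style encoder) and a binary-side embedding (from a BERT+GCN pipeline over the lifted intermediate representation) are compared with cosine similarity. The encoder functions here are stand-ins, not the paper's models.

    import torch

    def encode_source(source_code: str) -> torch.Tensor:
        # placeholder for the CodeBERT feature extractor used in the paper
        torch.manual_seed(hash(source_code) % (2 ** 31))
        return torch.randn(768)

    def encode_binary_ir(ir_text: str) -> torch.Tensor:
        # placeholder for BERT token features refined by a GCN over the IR graph
        torch.manual_seed(hash(ir_text) % (2 ** 31))
        return torch.randn(768)

    def match_score(source_code: str, ir_text: str) -> float:
        e_src, e_bin = encode_source(source_code), encode_binary_ir(ir_text)
        return float(torch.cosine_similarity(e_src, e_bin, dim=0))

    # One-to-many detection ranks all candidate source functions by match_score and
    # flags the binary function as a match of the top-ranked candidate if the score
    # exceeds a chosen threshold.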
