10 December 2025, Volume 25, Issue 12

Semantic Communication Security: Multi-Layered Threats and Countermeasures
JIN Zhigang, LUO Houyang, LIU Zepei, DING Yu
2025, 25 (12):  1827-1846.  doi: 10.3969/j.issn.1671-1122.2025.12.001

As a pivotal direction for next-generation communication technologies, semantic communication focuses on the meaning and utility of information, significantly enhancing communication efficiency and demonstrating revolutionary potential to drive a paradigm shift from modal transmission to knowledge-driven interaction. However, as systems increasingly rely on shared knowledge bases and deep learning models, the security boundary has extended from the traditional bit layer to the cognitive layer, exposing networks to novel threats such as cross-layer coupling and multi-modal poisoning. By systematically reviewing the literature of the past three years, this study constructed a four-dimensional security classification architecture covering “Data Privacy, Model Endogeneity, Transmission Physics, and Knowledge Cognition”, and comprehensively analyzed the vulnerability mechanisms of semantic communication in adversarial environments. Specifically, this paper highlighted high-level cognitive threats, including knowledge graph structural poisoning and inference chain manipulation, and systematically evaluated emerging defense technologies based on cognitive immunity, dynamic trust graphs, and quantum empowerment. Furthermore, combined with typical scenarios such as the Internet of Things and the Internet of Vehicles, the practical deployment challenges of lightweight and anti-jamming security mechanisms were analyzed. Finally, this paper proposed future research directions, including endogenous cognitive security, zero-trust semantic architecture, and cross-domain fusion defense, aiming to provide theoretical support and technical guidance for building a secure, trustworthy, and robust 6G semantic communication system.

Research on Protocol Fuzzing Technology Guided by Large Language Models
YANG Liqun, LI Zhen, WEI Chaoren, YAN Zhimin, QIU Yongxin
2025, 25 (12):  1847-1862.  doi: 10.3969/j.issn.1671-1122.2025.12.002

Security vulnerabilities in network protocol software occur frequently and pose serious threats to cyberspace security. Gray-box protocol fuzzing tools, such as AFLNet, have improved vulnerability detection by introducing coverage feedback and state modeling mechanisms. However, constrained by a persistent “semantic barrier”, these tools struggle to comprehend protocol syntax structures and contextual logic, resulting in limited testing efficiency. In recent years, large language models have demonstrated exceptional generalization and comprehension capabilities in tasks such as semantic modeling, contextual reasoning, and code generation, providing a promising pathway to overcome this barrier. This paper proposed LPF (LLMProFuzz), a protocol fuzzing framework guided by large language models, which addressed the limitations of traditional methods from three perspectives: firstly, automatically extracting protocol syntax templates through few-shot prompt engineering; secondly, designing a seed enrichment mechanism based on historical vulnerability characteristics to generate high-value initial cases that cover boundary and exceptional scenarios; thirdly, introducing a structure-aware mutation location selection strategy to increase the proportion of effective test cases. Experimental results on representative protocol stacks, including HTTP, FTP, and RTSP, demonstrate that LPF significantly outperforms baseline tools such as AFLNet and StateAFL in terms of code coverage, state coverage, and testing efficiency.
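
To make the prompt-driven workflow concrete, the following minimal Python sketch illustrates two of the ideas the abstract describes: a few-shot prompt asking a large language model to emit a protocol syntax template, and a structure-aware choice of mutation offsets that spares keyword and terminator bytes. The prompt wording and the names build_grammar_prompt and choose_mutation_offsets are illustrative assumptions; this is not the LPF implementation.

    import random
    import re

    FEW_SHOT_GRAMMAR_PROMPT = """You are a protocol analyst. Given a raw client message,
    output a syntax template: one line per field as <name>:<type>:<mutable yes/no>.

    Example (FTP):
    message: USER alice\\r\\n
    template:
      command:keyword:no
      argument:string:yes
      terminator:literal:no

    Now analyze:
    message: {message}
    template:
    """

    def build_grammar_prompt(raw_message: str) -> str:
        """Fill the few-shot template with a captured protocol message."""
        return FEW_SHOT_GRAMMAR_PROMPT.format(message=raw_message)

    def choose_mutation_offsets(message: bytes, n_sites: int = 3) -> list[int]:
        """Structure-aware site selection: prefer bytes outside the leading command
        keyword and outside the CRLF terminator, so mutations stay parseable."""
        m = re.match(rb"^([A-Z]+) ", message)            # leading command keyword
        start = m.end() if m else 0
        end = len(message) - 2 if message.endswith(b"\r\n") else len(message)
        candidates = list(range(start, max(start + 1, end)))
        return sorted(random.sample(candidates, min(n_sites, len(candidates))))

    if __name__ == "__main__":
        msg = b"RETR /etc/passwd\r\n"
        print(build_grammar_prompt(msg.decode()))
        print("mutation offsets:", choose_mutation_offsets(msg))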

An Efficient Method for Router Alias Identification with Active-Passive Collaboration
HU Dan, YANG Jilong
2025, 25 (12):  1863-1877.  doi: 10.3969/j.issn.1671-1122.2025.12.003

Router alias identification is one of the key technologies for accurately analyzing network topology. Aiming at the low efficiency and weak anti-interference ability of router alias identification in large-scale networks, this paper proposed an efficient router alias identification method with active-passive collaboration. First, a collaborative framework was constructed that integrated four types of active probing (ICMP/TCP/UDP/SYN) with BGP/SNMP passive monitoring, and probe scheduling was optimized geographically through a quadtree index to reduce cross-regional probing delay. Then, a dynamic task allocation model was designed, reducing the computational complexity from O(n²) to O(n) by using load variance threshold control. Furthermore, an IPBH anti-interference algorithm was proposed, which suppressed noise interference through a sliding window mechanism and dynamic threshold adjustment. Extensive experiments were carried out on the commonly used CAIDA2023 dataset. The results show that the proposed method has obvious advantages in identification efficiency and anti-interference ability compared with the typical MBT router alias identification method: the time required to identify 10,000 routers was reduced from 42.3 seconds to 4.1 seconds. Through local smoothing with a sliding window and dynamic threshold adjustment using a Kalman filter, the randomization noise of the IP identification (IPID) field and the interference of equal-cost multi-path routing are suppressed, and an alias identification accuracy of 90.1% is achieved in a 25% noise environment, which is 7% to 25% higher than that of methods such as RadarGun, Hybrid Alias, and NoiseShield.
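
The anti-interference step can be pictured with a toy Python example that combines sliding-window smoothing with a scalar Kalman filter over noisy IPID samples. It illustrates the general denoising idea only, not the paper's IPBH algorithm, and all parameter values and function names are assumptions chosen for the example.

    from statistics import median

    def sliding_median(samples, window=5):
        """Smooth an IPID sequence with a centered sliding-window median."""
        half = window // 2
        return [median(samples[max(0, i - half): i + half + 1]) for i in range(len(samples))]

    def kalman_smooth(samples, process_var=4.0, measure_var=64.0):
        """Scalar Kalman filter tracking a roughly monotone IPID counter."""
        est, err = float(samples[0]), 1.0
        out = []
        for z in samples:
            err += process_var                    # prediction step: uncertainty grows
            gain = err / (err + measure_var)      # Kalman gain
            est += gain * (z - est)               # correct the estimate with the observation
            err *= (1.0 - gain)
            out.append(est)
        return out

    if __name__ == "__main__":
        # Interleaved probes to candidate aliases; a shared IPID counter should look
        # like one monotone sequence once randomized outliers are suppressed.
        raw = [100, 103, 230, 109, 112, 115, 40, 121, 124]
        print(sliding_median(raw))
        print([round(v, 1) for v in kalman_smooth(sliding_median(raw))])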

Detecting Poisoned Samples for Untargeted Backdoor Attacks
PANG Shuchao, LI Zhengxiao, QU Junyi, MA Ruhao, CHEN Hechang, DU Anan
2025, 25 (12):  1878-1888.  doi: 10.3969/j.issn.1671-1122.2025.12.004

Backdoor attacks, as an important form of data poisoning attack, pose a significant threat to the reliability of datasets and the security of model training. Current defensive strategies are largely designed for targeted backdoor attacks, while research on untargeted backdoor attacks remains scarce. This paper proposed a black-box poisoned sample detection method for untargeted backdoor attacks based on prediction behavior anomalies. The method consisted of two modules: a poisoned-example detection module based on prediction behavior anomalies, which detected suspicious examples from the discrepancy in prediction behavior between the original samples and their reconstructed counterparts; and a diffusion-model-based data generation module, which generated a new, trigger-free dataset similar to the original dataset. The feasibility of the method is demonstrated through experiments involving different types of untargeted backdoor attacks and different generative models, and the great potential and application value of generative models, especially diffusion models, in backdoor detection and defense is also demonstrated.
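
The core detection signal can be sketched in a few lines of Python: compare the classifier's prediction on a sample with its prediction on a regenerated version of that sample and flag large disagreements. The callables model() and reconstruct() are hypothetical stand-ins for the classifier and the diffusion-based generator, and the threshold is a placeholder; this is a sketch of the principle, not the paper's detector.

    import numpy as np

    def kl_divergence(p, q, eps=1e-8):
        p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
        return float(np.sum(p * np.log(p / q)))

    def detect_suspicious(samples, model, reconstruct, kl_threshold=1.0):
        """Return indices of samples whose prediction behavior changes after reconstruction."""
        flagged = []
        for i, x in enumerate(samples):
            p_orig = model(x)                  # softmax probabilities on the raw sample
            p_recon = model(reconstruct(x))    # probabilities on the trigger-free reconstruction
            label_flip = int(np.argmax(p_orig)) != int(np.argmax(p_recon))
            if label_flip or kl_divergence(p_orig, p_recon) > kl_threshold:
                flagged.append(i)
        return flagged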

Secure Gain-Scheduling Method for Stochastic Nonlinear CPS Based on Dual-Domain Polynomial Framework
XIE Xiangpeng, SHAO Xingchen
2025, 25 (12):  1889-1900.  doi: 10.3969/j.issn.1671-1122.2025.12.005

In nonlinear cyber-physical systems (CPS), random switching behaviors and nonlinear characteristics often coexist, while complex transition probabilities and cyber attacks further threaten system safety and stability. This paper proposed a secure gain-scheduling method for stochastic nonlinear CPS based on a dual-domain polynomial framework. The method was constructed by integrating fuzzy modeling with Markov jump systems, which enabled accurate characterization of nonlinear dynamics and stochastic switching phenomena. A polytopic reconstruction strategy in the structural domain was introduced to transform imprecise and partially unknown transition probabilities into a tractable form, thereby avoiding infeasibility under complex probabilistic environments. In the control design domain, homogeneous polynomial Lyapunov functions and controller structures were employed to effectively reduce conservatism and enhance robustness. Theoretical analysis indicates that the proposed method guarantees exponential mean-square stability and performance optimization even under denial-of-service attacks and other cyber threats. Numerical simulations further verify the effectiveness of the proposed approach, showing its superiority in expanding the feasible solution domain and improving performance indices. The results provide a practical solution for secure control of CPS operating under complex probabilistic and adversarial conditions.
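
As background, the formulas below (in LaTeX notation) give the generic textbook form of a discrete-time fuzzy Markov jump model and of the exponential mean-square stability bound the abstract refers to; the notation is illustrative and is not taken from the paper itself.

    \[
    x(k+1)=\sum_{i=1}^{r} h_i\bigl(\theta(k)\bigr)\Bigl(A_{i,\sigma(k)}\,x(k)+B_{i,\sigma(k)}\,u(k)\Bigr),
    \qquad
    \Pr\{\sigma(k+1)=n \mid \sigma(k)=m\}=\pi_{mn},
    \]
    \[
    \mathbb{E}\bigl[\|x(k)\|^{2}\bigr]\le \alpha\,\beta^{k}\,\|x(0)\|^{2}
    \quad\text{for some }\alpha\ge 1,\ 0<\beta<1.
    \]

Here the h_i are fuzzy membership functions, σ(k) is the Markov switching mode with (possibly imprecise) transition probabilities π_mn, and the second inequality is the exponential mean-square stability property the controller is required to guarantee.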

A Proxy Ring Signature Scheme Based on SM9 Algorithm
ZHANG Xuefeng, WANG Kehang
2025, 25 (12):  1901-1913.  doi: 10.3969/j.issn.1671-1122.2025.12.006

In view of the application requirements of proxy ring signatures in scenarios demanding strong user identity protection, this paper proposed a proxy ring signature scheme based on the SM9 digital signature algorithm. The scheme consisted of six steps: setup, extract, proxy delegation, verify proxy, sign, and verify. The key generation center computed each user’s key from the user’s identity; the original signer computed the proxy authorization from the known information and then delegated it to the proxy signer. After the proxy signer verified the authenticity of the authorization, the proxy ring signature was generated through the signing step, and the generated signature could be checked by the verification algorithm. The scheme not only realizes the delegation of signing rights, but also protects the privacy of signers through the anonymity of ring signatures. It is also proved that the scheme is unforgeable under adaptive chosen-message attacks in the random oracle model. The efficiency analysis shows that the proposed scheme has higher efficiency and better practicability.

Multimodal Feature Fusion Encrypted Traffic Classification Model Based on Graph Variational Auto-Encoder
HAN Yiliang, PENG Yixuan, WU Xuguang, LI Yu
2025, 25 (12):  1914-1926.  doi: 10.3969/j.issn.1671-1122.2025.12.007

With the widespread application and continuous evolution of traffic encryption technology, improving the accuracy of encrypted traffic classification has become a critical technical challenge for ensuring network security and efficient management. Existing encrypted traffic classification methods use the same mechanism to extract header and payload features, failing to fully utilize the effective information with different characteristics in the header and payload, while ignoring the random features of ciphertext, leading to performance bottlenecks in classification accuracy. This paper proposed a multi-modal feature fusion encrypted traffic classification model (MFF-VGAE) based on graph variational auto-encoder. The model employed multi-modal feature fusion technology to extract and fuse effective information from the header and payload separately. Additionally, the model used a graph variational auto-encoder to map sample features into a random space following a normal distribution, generating augmented samples while learning the probability distribution of ciphertext data. Through training, the model improves classification accuracy and robustness while reducing computational complexity. Experimental results show that the proposed model outperforms current mainstream baseline models on the ISCX VPN-nonVPN and ISCX Tor-nonTor datasets, with a 9.1% reduction in computational complexity compared to the TFE-GNN model with a similar structure.
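
The variational step the model relies on can be illustrated with a short PyTorch sketch: an encoder head maps the fused features to a Gaussian latent space, samples via the reparameterization trick, and contributes a KL term to the training loss. Layer sizes and class names are placeholders; this is not the MFF-VGAE architecture itself.

    import torch
    import torch.nn as nn

    class VariationalHead(nn.Module):
        def __init__(self, in_dim=128, latent_dim=32):
            super().__init__()
            self.mu = nn.Linear(in_dim, latent_dim)
            self.logvar = nn.Linear(in_dim, latent_dim)

        def forward(self, fused_features):
            mu, logvar = self.mu(fused_features), self.logvar(fused_features)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
            kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
            return z, kl   # z feeds the classifier/decoder; kl joins the training loss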

Protection Method for Dynamic Handwritten Signature Verification Models Based on Contrastive Learning
FU Zhangjie, CHEN Tianyu, CUI Qi
2025, 25 (12):  1927-1935.  doi: 10.3969/j.issn.1671-1122.2025.12.008

Dynamic handwritten signatures, as an important means of identity verification, are usually verified by comparing the signature template with the signature to be verified and determining its authenticity based on a threshold. However, with the wide application of deep learning in handwritten signature verification, model scale and training costs have increased significantly, and when models are provided as services, they are at risk of being illegally invoked or abused. To ensure that the signature verification model is used only by authorized users, this paper proposed a dynamic handwritten signature model protection method based on contrastive learning. The method jointly optimized the key embedding and the signature verification model by constructing contrastive losses between signatures containing the correct key and the original signatures, as well as between signatures containing the correct key and signatures containing random keys, so that the model maintained verification capability only for signatures containing the correct key, thereby effectively preventing unauthorized access. Meanwhile, this mechanism enables confirmation of model ownership and tracking of intellectual property rights. Experimental results on the large dynamic signature dataset DeepSignDB show that, with 4 signature templates and skilled forgery samples included, the equal error rate for signatures with the correct key increases from 2.65% for the original model to 4.40%, while the equal error rates for original signatures and signatures with random keys rise to 16.98% and 16.51%, respectively. The method thus achieves a significant enhancement in model security and traceability while largely maintaining the original signature verification performance.
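
One plausible PyTorch instantiation of the described contrastive objective is sketched below: it pulls correct-key signatures toward the enrolled template while pushing keyless and random-key signatures beyond a margin. The function name, margin value, and embedding notation are assumptions, not the paper's exact loss.

    import torch
    import torch.nn.functional as F

    def key_contrastive_loss(emb_correct, emb_template, emb_plain, emb_random, margin=1.0):
        """emb_*: embeddings of the correct-key signature, the enrolled template,
        the original (keyless) signature, and a random-key signature."""
        pull = F.pairwise_distance(emb_correct, emb_template).pow(2).mean()
        push_plain = F.relu(margin - F.pairwise_distance(emb_correct, emb_plain)).pow(2).mean()
        push_rand = F.relu(margin - F.pairwise_distance(emb_correct, emb_random)).pow(2).mean()
        return pull + push_plain + push_rand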

Research on Community Detection and Core Node Discovery Based on Improved Louvain Algorithm
LIU Dahe, XIU Jiapeng, YANG Zhengqiu
2025, 25 (12):  1936-1947.  doi: 10.3969/j.issn.1671-1122.2025.12.009

For community detection, this paper proposed a detection method based on an improved Louvain algorithm, and carried out in-depth mining of core nodes within communities by constructing a comprehensive scoring model that combines multiple centrality indicators. The improved algorithm significantly improved the accuracy and efficiency of community detection by merging redundant edges and optimizing node partitioning. For the weight selection of the comprehensive scoring model, the particle swarm optimization algorithm was introduced to reduce the search complexity, thereby further improving the model’s performance in core node identification. In the experiments, multiple network datasets were used to verify the effectiveness of the method. The results show that the improved Louvain algorithm has better community structure identification capability in complex networks. Meanwhile, by comparing the core node identification effects of different centrality indicators, it was found that the comprehensive scoring model combined with particle swarm optimization has obvious advantages in information dissemination and node importance assessment. This work provides an effective technical means for community detection and core node identification, and has the potential to be extended to larger-scale networks.
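
As an illustration of the composite scoring idea (assuming networkx 3.x is available), the sketch below mixes several standard centrality indicators with a fixed weight vector; in the paper the weights are tuned by particle swarm optimization and the Louvain step is improved, both of which are omitted here. Function names and weights are placeholders.

    import networkx as nx

    def composite_scores(G, weights=(0.4, 0.3, 0.3)):
        """Weighted combination of degree, betweenness, and closeness centrality."""
        deg = nx.degree_centrality(G)
        btw = nx.betweenness_centrality(G)
        clo = nx.closeness_centrality(G)
        w_deg, w_btw, w_clo = weights
        return {v: w_deg * deg[v] + w_btw * btw[v] + w_clo * clo[v] for v in G}

    if __name__ == "__main__":
        G = nx.karate_club_graph()
        comms = nx.community.louvain_communities(G, seed=1)   # baseline Louvain partition
        top = sorted(composite_scores(G).items(), key=lambda kv: kv[1], reverse=True)[:5]
        print(len(comms), "communities; candidate core nodes:", top)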

Blockchain-Based Privacy-Preserving Cross-Domain Authentication Protocol
ZHANG Guanping, WEI Fushan, CHEN Xi, GU Chunxiang
2025, 25 (12):  1948-1960.  doi: 10.3969/j.issn.1671-1122.2025.12.010

In the Internet of Things environment, cross-domain authentication faces the problems of privacy protection and reliance on a trusted third party. To address these challenges, a blockchain-based privacy-preserving cross-domain authentication protocol was proposed. With the support of blockchain technology, the protocol realized identity authentication and key exchange between entities in different parameter domains, and effectively reduced the performance burden on both the server and the user. Specifically, a secret value was generated from the user’s biometric vector by a fuzzy extractor, and the key was computed in combination with lattice-based encryption, so that implicit identity authentication was completed while the user’s biometric privacy was protected. In addition, the pseudo-identity, public key, and public parameters of each trust domain generated by the user during cross-domain access were uploaded to the blockchain to ensure the correctness of verification results and the non-repudiation of the participants’ behavior in the protocol. In the random oracle model, the semantic security of the protocol against polynomial-time adversaries was proved based on the decisional learning with errors problem and the discrete logarithm problem. Compared with similar protocols, the proposed protocol is compatible with existing security mechanisms and has low computation and communication overhead, thus providing an efficient and secure new solution for cross-domain authentication.
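
To show how a stable secret can be derived from noisy biometrics, the toy Python sketch below implements the classical code-offset fuzzy-extractor idea with a simple repetition code. It is only a didactic stand-in for the paper's construction and omits the lattice-based key computation and the blockchain interaction entirely; the repetition parameter and helper names are assumptions.

    import hashlib
    import secrets

    REP = 5  # each secret bit is repeated REP times; majority voting corrects up to 2 flips per block

    def gen(biometric_bits):
        """Gen: derive secret R and public helper data P from the enrollment biometrics."""
        k = len(biometric_bits) // REP
        codeword = [b for b in (secrets.randbelow(2) for _ in range(k)) for _ in range(REP)]
        helper = [w ^ c for w, c in zip(biometric_bits, codeword)]
        secret = hashlib.sha256(bytes(codeword[::REP])).digest()
        return secret, helper

    def rep(noisy_bits, helper):
        """Rep: recover the same secret from a noisy reading plus the helper data."""
        cw = [w ^ p for w, p in zip(noisy_bits, helper)]
        decoded = [int(sum(cw[i * REP:(i + 1) * REP]) > REP // 2) for i in range(len(cw) // REP)]
        return hashlib.sha256(bytes(decoded)).digest()

    if __name__ == "__main__":
        enroll = [secrets.randbelow(2) for _ in range(50)]
        R, P = gen(enroll)
        noisy = enroll.copy()
        noisy[3] ^= 1
        noisy[27] ^= 1                      # two flipped bits in the fresh reading
        assert rep(noisy, P) == R
        print("recovered the same secret from a noisy biometric reading")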

Key Switching for Somewhat Homomorphic Encryption Based on RLWR
QIN Siying, SUN Bing, FU Shaojing, TANG Xiaomei
2025, 25 (12):  1961-1974.  doi: 10.3969/j.issn.1671-1122.2025.12.011

Currently, systematic research on key switching techniques in homomorphic encryption schemes based on RLWR remains relatively scarce. Existing work primarily focuses on the dimension expansion caused by ciphertext multiplication, where key switching is used to restore the ciphertext dimension, while the key switching schemes required for operations such as rotation have not yet been systematically investigated. This paper conducted a systematic study of key switching techniques by leveraging the structural characteristics of the RLWR-SHE scheme. The classical key switching scheme was first improved to adapt it to the homomorphic computation requirements of RLWR-SHE. To address the significant noise introduced by this scheme, the key switching approach was then modified based on mainstream noise reduction strategies, namely the coefficient decomposition technique, the modulus extension method, and their combination. Theoretical analysis and comparison of the errors associated with the different schemes show that the hybrid key switching approach, which combines coefficient decomposition and modulus extension, offers the best noise control, at the cost of slightly increased computational complexity and dimension expansion. This study provides more flexible key switching options for RLWR-SHE, allowing users to select an appropriate scheme according to practical requirements such as error tolerance or efficiency.
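
As background for the noise discussion, the formulas below (LaTeX notation) state the classical digit-decomposition key switching that such schemes start from; they are the textbook baseline, not the paper's RLWR-specific modifications.

    \[
    c' \;=\; \sum_{j=0}^{\ell-1} B^{\,j} d_j,\qquad \|d_j\|_\infty < B,\qquad \ell=\lceil \log_B q\rceil,
    \]
    \[
    \mathrm{ksk}_j \approx \mathrm{Enc}_{s}\!\bigl(B^{\,j} s'\bigr),\qquad
    \mathrm{KeySwitch}(c')=\sum_{j=0}^{\ell-1} d_j\cdot \mathrm{ksk}_j .
    \]

The noise added by this step grows roughly with ℓ·B times the noise of the key-switching keys, so a larger base B trades fewer digits for more per-digit noise; modulus extension (temporarily working modulo Pq and scaling the result back by P) divides the key-switching-key contribution by roughly P, which is consistent with the abstract's finding that the hybrid of the two strategies gives the best noise control at some extra cost.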

Privacy-Preserving Sorting Scheme Based on Paillier Homomorphic Encryption
WANG Houzhen, JIANG Haolang, LIU Jichen, TU Hang
2025, 25 (12):  1975-1989.  doi: 10.3969/j.issn.1671-1122.2025.12.012

In the era of big data, data sharing has become a key approach to releasing the potential of data and enhancing business value. Financial institutions can improve their capabilities in precision marketing, fraud detection, and risk management by collaborating with communication operators to jointly compute on users’ communication data. However, how to ensure privacy protection for users during data sharing remains an urgent challenge. This paper proposed a privacy-preserving sorting scheme based on Paillier homomorphic encryption, and rigorously proved its correctness and security. The proposed scheme not only enabled weighted summation over ciphertexts, but also facilitated efficient sorting over ciphertexts, so that shared data remain usable while staying invisible. Compared with existing schemes, this method is more efficient in ciphertext comparison and is suitable for privacy-preserving sorting of large-scale data. In addition, taking the recommendation of high-quality bank customers as an example application scenario, the paper verifies the correctness and practicability of the proposed scheme through simulation experiments.
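
The ciphertext weighted-summation building block can be demonstrated with a self-contained toy Paillier implementation in Python (demo-sized primes, the g = n + 1 variant). The comparison and sorting protocol itself is more involved and is not reproduced here, and the attribute names and weights in the example are made up for illustration.

    import math
    import secrets

    def keygen(p=1009, q=1013):
        # Toy primes for illustration only; real deployments use moduli of 2048 bits or more.
        n = p * q
        lam = math.lcm(p - 1, q - 1)
        mu = pow(lam, -1, n)                  # valid because we fix g = n + 1
        return (n,), (n, lam, mu)

    def encrypt(pk, m):
        (n,) = pk
        n2 = n * n
        r = secrets.randbelow(n - 1) + 1
        while math.gcd(r, n) != 1:
            r = secrets.randbelow(n - 1) + 1
        return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

    def decrypt(sk, c):
        n, lam, mu = sk
        return (((pow(c, lam, n * n) - 1) // n) * mu) % n

    def add(pk, c1, c2):                      # Enc(m1) * Enc(m2) mod n^2  ->  Enc(m1 + m2)
        (n,) = pk
        return (c1 * c2) % (n * n)

    def scale(pk, c, k):                      # Enc(m) ** k mod n^2        ->  Enc(k * m)
        (n,) = pk
        return pow(c, k, n * n)

    if __name__ == "__main__":
        pk, sk = keygen()
        calls, traffic = encrypt(pk, 120), encrypt(pk, 35)           # two encrypted user attributes
        score = add(pk, scale(pk, calls, 3), scale(pk, traffic, 2))  # 3*calls + 2*traffic, in ciphertext
        assert decrypt(sk, score) == 3 * 120 + 2 * 35
        print("decrypted weighted score:", decrypt(sk, score))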

Sanitization Processing and Recognition Method Driven by Large Language Models
MENG Hui, MAO Linlin, PENG Juzhi
2025, 25 (12):  1990-1998.  doi: 10.3969/j.issn.1671-1122.2025.12.013

Static taint analysis plays a crucial role in automatically discovering data-flow-related security vulnerabilities, but traditional rule-based or symbolic approaches often suffer from high false positive and false negative rates in real-world engineering settings due to custom sanitizer functions, context-dependent validation and escaping logic, and dynamic code features. To address this problem, this paper proposed a sanitization processing and recognition method driven by large language models: code and its calling context were mapped into model-understandable descriptions via a semantic transformation operator; structured prompts guided the large language model to output determinations along with evidence-based explanations; and confidence thresholds, caching, and selective symbolic-execution fallback were combined to improve reliability and engineering practicality. Evaluation on three public Java Web benchmark datasets shows that the proposed method significantly outperforms the rule-based matching method and the AST taint analysis method in sanitization recognition, achieving at least 89.4% identification accuracy across different vulnerability scenarios.
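
A minimal Python sketch of the structured-prompt plus confidence-gating idea is given below; query_llm is a placeholder for whatever chat-completion client is used, and the prompt fields, JSON schema, cache, and threshold are assumptions rather than the paper's exact design.

    import hashlib
    import json

    PROMPT_TEMPLATE = """You are a taint-analysis assistant.
    Decide whether the function below sanitizes the tainted value for the given sink.
    Vulnerability class: {vuln_class}
    Sink: {sink}
    Function source:
    {code}
    Calling context:
    {context}
    Answer as JSON: {{"is_sanitizer": true/false, "confidence": 0.0-1.0, "evidence": "..."}}
    """

    _cache: dict[str, dict] = {}

    def classify_sanitizer(code, context, vuln_class, sink, query_llm, threshold=0.8):
        """Return the LLM verdict if it is confident; otherwise return None so the caller
        can defer to a fallback analysis (e.g., selective symbolic execution)."""
        key = hashlib.sha256((code + context + vuln_class + sink).encode()).hexdigest()
        if key in _cache:
            return _cache[key]
        prompt = PROMPT_TEMPLATE.format(vuln_class=vuln_class, sink=sink, code=code, context=context)
        verdict = json.loads(query_llm(prompt))        # expects the JSON object requested above
        if verdict.get("confidence", 0.0) < threshold:
            return None                                # low confidence: hand off to fallback
        _cache[key] = verdict
        return verdict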
