With the rapid development of artificial intelligence, large language models (LLMs) have been widely adopted in fields such as scientific research, education, finance, and healthcare owing to their powerful natural language processing capabilities. However, their widespread adoption also introduces a range of security risks: bias and discrimination, generation of harmful content, leakage of user privacy, dissemination of misleading information, and susceptibility to malicious adversarial attacks. These risks can harm users and even threaten social stability and ethical order, making comprehensive security testing and evaluation of LLMs essential. This article surveys current research on LLM security assessment: it categorizes common security risks and reviews mainstream security evaluation techniques; it introduces relevant assessment methods, metrics, commonly used datasets, and tools; and it summarizes key security evaluation standards and guidelines developed worldwide. It further discusses the technical concepts, principles, and implementation mechanisms of safety alignment, along with its evaluation framework. Finally, by analyzing the challenges faced in current LLM security assessment, it outlines future technological trends and research directions, aiming to provide guidance for research and practice in academia and industry.