Netinfo Security ›› 2020, Vol. 20 ›› Issue (9): 87-91.doi: 10.3969/j.issn.1671-1122.2020.09.018


The Safety Evaluation and Defense Reinforcement of the AI System

WANG Wenhua, HAO Xin, LIU Yan, WANG Yang

  1. Baidu Security Department, Beijing 100085, China
  • Received: 2020-07-16   Online: 2020-09-10   Published: 2020-10-15
  • Contact: Wenhua WANG   E-mail: wangwenhua@baidu.com

Abstract:

Deep learning models have performed well on many AI tasks, but carefully crafted adversarial samples can trick well-trained models into making false judgments. The success of such adversarial attacks calls into question the usability of AI systems. To improve security and robustness, this paper follows the security development lifecycle and proposes a security evaluation and defense reinforcement scheme for AI systems. The scheme improves a system's ability to resist attacks and helps developers build more secure AI systems through measures such as accurately detecting and intercepting adversarial attacks, systematically evaluating model robustness, and monitoring new adversarial attacks in real time.
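The robustness evaluation the abstract refers to is commonly operationalized by perturbing test inputs with a known attack and measuring how much accuracy drops. The sketch below is not the paper's scheme but a minimal, hypothetical illustration of that idea using the FGSM attack in PyTorch; the model, data loader, and epsilon value are all placeholders.

```python
# Hypothetical robustness check (not the paper's implementation):
# generate FGSM adversarial samples and measure accuracy on them.
import torch
import torch.nn.functional as F


def fgsm_perturb(model, x, y, epsilon):
    """FGSM: x_adv = clip(x + epsilon * sign(grad_x loss))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()


def robust_accuracy(model, loader, epsilon):
    """Fraction of samples still classified correctly after perturbation."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x_adv = fgsm_perturb(model, x, y, epsilon)
        with torch.no_grad():
            pred = model(x_adv).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total
```

Comparing this adversarial accuracy against clean accuracy (and repeating the measurement for several epsilon values or stronger attacks) gives a rough, reproducible robustness score of the kind an evaluation pipeline can track over time.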

Key words: deep learning, adversarial attack, security development lifecycle
