Netinfo Security ›› 2020, Vol. 20 ›› Issue (9): 87-91. doi: 10.3969/j.issn.1671-1122.2020.09.018

• Selected Papers •

The Safety Evaluation and Defense Reinforcement of the AI System

WANG Wenhua, HAO Xin, LIU Yan, WANG Yang

  1. Baidu Security Department, Beijing 100085, China
  • Received: 2020-07-16  Online: 2020-09-10  Published: 2020-10-15
  • Contact: WANG Wenhua  E-mail: wangwenhua@baidu.com
  • About the authors: WANG Wenhua (b. 1990), female, from Shaanxi, engineer, M.S., research interest: AI model security | HAO Xin (b. 1983), male, from Heilongjiang, engineer, B.S., research interest: AI model security | LIU Yan (b. 1985), male, from Hubei, engineer, B.S., research interest: AI model security | WANG Yang (b. 1984), male, from Shandong, engineer, M.S., research interest: AI model security



Abstract:

Deep learning models have achieved excellent performance on many artificial intelligence tasks, but carefully crafted adversarial examples can trick well-trained models into making wrong judgments. The success of such adversarial attacks calls the usability of AI systems into question. To improve the security and robustness of AI systems, this paper follows the security development lifecycle and proposes a security evaluation and defense reinforcement scheme for the AI system. Through measures such as accurately detecting and intercepting adversarial attacks, scientifically evaluating model robustness, and monitoring new adversarial attacks in real time, the scheme improves the system's ability to resist adversarial attacks and helps developers build more secure AI systems.
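The adversarial examples the abstract refers to can be illustrated with a minimal fast-gradient-sign-method (FGSM) sketch against a toy linear classifier. The model, its weights, and the perturbation budget below are illustrative assumptions for exposition only; they are not the paper's scheme or setup.

```python
import numpy as np

# Toy linear classifier: score(x) = w.x + b, class 1 if score > 0.
# These weights are hypothetical stand-ins for a trained model.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    """Return the predicted class (0 or 1) of the toy classifier."""
    return int(w @ x + b > 0)

def fgsm(x, eps):
    """One FGSM step: move x by eps along the sign of the score gradient.

    For a linear score the gradient w.r.t. x is just w, so the step is
    x -/+ eps * sign(w), with the sign chosen to push the score across
    the decision boundary from the current prediction.
    """
    direction = -np.sign(w) if predict(x) == 1 else np.sign(w)
    return x + eps * direction

x = np.array([0.4, -0.1, 0.2])    # "clean" input, classified as 1
x_adv = fgsm(x, eps=0.3)          # small, bounded perturbation

print(predict(x), predict(x_adv)) # → 1 0 : the perturbation flips the decision
```

The perturbation is bounded in max-norm by `eps`, yet it flips the model's output; the detection and interception measures described in the abstract target exactly this kind of small, deliberately directed input change.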

Key words: deep learning, adversarial attack, security development lifecycle

CLC number: