[1] DENG Jia, DONG Wei, SOCHER R, et al. ImageNet: A Large-Scale Hierarchical Image Database[C]// IEEE. 2009 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE, 2009: 248-255.
[2] BROWN T, MANN B, RYDER N, et al. Language Models are Few-Shot Learners[J]. Advances in Neural Information Processing Systems, 2020, 33: 1877-1901.
[3] YOSINSKI J, CLUNE J, NGUYEN A, et al. Understanding Neural Networks through Deep Visualization[EB/OL]. (2015-06-22)[2023-08-13]. https://arxiv.org/abs/1506.06579.
[4] HITAJ D, MANCINI L V. Have You Stolen My Model? Evasion Attacks against Deep Neural Network Watermarking Techniques[EB/OL]. (2018-09-03)[2023-08-13]. https://arxiv.org/abs/1809.00615.
[5] TRAMÈR F, ZHANG Fan, JUELS A, et al. Stealing Machine Learning Models via Prediction APIs[C]// USENIX. 25th USENIX Security Symposium (USENIX Security 16). Berkeley: USENIX, 2016: 601-618.
[6] ANDRIUSHCHENKO M, CROCE F, FLAMMARION N, et al. Square Attack: A Query-Efficient Black-Box Adversarial Attack via Random Search[C]// Springer. European Conference on Computer Vision. Heidelberg: Springer, 2020: 484-501.
[7] AIKEN W, KIM H, WOO S, et al. Neural Network Laundering: Removing Black-Box Backdoor Watermarks from Deep Neural Networks[EB/OL]. (2021-07-01)[2023-08-13]. https://www.sciencedirect.com/science/article/abs/pii/S0167404821001012?via%3Dihub.
[8] WANG Bolun, YAO Yuanshun, SHAN S, et al. Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks[C]// IEEE. 2019 IEEE Symposium on Security and Privacy (SP). New York: IEEE, 2019: 707-723.
[9] LIU Kang, DOLAN-GAVITT B, GARG S. Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks[C]// Springer. International Symposium on Research in Attacks, Intrusions, and Defenses. Heidelberg: Springer, 2018: 273-294.
[10] CHEN B, CARVALHO W, BARACALDO N, et al. Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering[EB/OL]. (2018-11-09)[2023-08-13]. https://arxiv.org/abs/1811.03728.
[11] CHOU E, TRAMÈR F, PELLEGRINO G, et al. SentiNet: Detecting Physical Attacks against Deep Learning Systems[EB/OL]. (2018-12-02)[2023-08-13]. https://arxiv.org/abs/1812.00292.
[12] CHEN Huili, FU Cheng, ZHAO Jishen, et al. DeepInspect: A Black-Box Trojan Detection and Mitigation Framework for Deep Neural Networks[EB/OL]. (2019-08-10)[2023-08-13]. https://dl.acm.org/doi/10.5555/3367471.3367691.
[13] GUO Wenbo, WANG Lun, XING Xinyu, et al. TABOR: A Highly Accurate Approach to Inspecting and Restoring Trojan Backdoors in AI Systems[EB/OL]. (2019-08-08)[2023-08-13]. https://arxiv.org/abs/1908.01763.
[14] LIU Zhuang, SUN Mingjie, ZHOU Tinghui, et al. Rethinking the Value of Network Pruning[EB/OL]. (2018-10-11)[2023-08-13]. https://arxiv.org/abs/1810.05270.
[15] HUANG Xijie, ALZANTOT M, SRIVASTAVA M. NeuronInspect: Detecting Backdoors in Neural Networks via Output Explanations[EB/OL]. (2019-11-18)[2023-08-13]. https://arxiv.org/abs/1911.07399.
[16] UCHIDA Y, NAGAI Y, SAKAZAWA S, et al. Embedding Watermarks into Deep Neural Networks[C]// ACM. The 2017 ACM on International Conference on Multimedia Retrieval. New York: ACM, 2017: 269-277.
[17] ZHANG Jie, CHEN Dongdong, LIAO Jing, et al. Deep Model Intellectual Property Protection via Deep Watermarking[EB/OL]. (2021-03-08)[2023-08-13]. https://arxiv.org/abs/2103.04980.
[18] GOODFELLOW I J, SHLENS J, SZEGEDY C. Explaining and Harnessing Adversarial Examples[EB/OL]. (2014-12-20)[2023-08-13]. https://arxiv.org/abs/1412.6572.
[19] LOU Xiaoxuan, GUO Shangwei, LI Jiwei, et al. Ownership Verification of DNN Architectures via Hardware Cache Side Channels[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(11): 8078-8093.
[20] GU Tianyu, DOLAN-GAVITT B, GARG S. BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain[EB/OL]. (2017-08-22)[2023-08-13]. https://arxiv.org/abs/1708.06733.
[21] CHEN Wenlin, WILSON J T, TYREE S, et al. Compressing Neural Networks with the Hashing Trick[C]// ACM. The 32nd International Conference on Machine Learning. New York: ACM, 2015: 2285-2294.
[22] HU Hailong, PANG Jun. Stealing Machine Learning Models: Attacks and Countermeasures for Generative Adversarial Networks[C]// ACM. Annual Computer Security Applications Conference (ACSAC '21). New York: ACM, 2021: 1-16.
[23] BATINA L, BHASIN S, JAP D, et al. CSI Neural Network: Using Side-Channels to Recover Your Artificial Neural Network Information[EB/OL]. (2018-10-22)[2023-08-13]. https://arxiv.org/abs/1810.09076.
[24] WEI Lingxiao, LUO Bo, LI Yu, et al. I Know What You See: Power Side-Channel Attack on Convolutional Neural Network Accelerators[C]// ACM. The 34th Annual Computer Security Applications Conference. New York: ACM, 2018: 393-406.