Automation technology, computer technology
|
Convolutional neural network acceleration algorithm based on filters pruning |
Hao LI, Wen-jie ZHAO*, Bo HAN
College of Aeronautics and Astronautics, Zhejiang University, Hangzhou 310027, China |
[1] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks [C] // International Conference on Neural Information Processing Systems. Lake Tahoe: Curran Associates Inc., 2012.

[2] GRAVES A, SCHMIDHUBER J. Framewise phoneme classification with bidirectional LSTM and other neural network architectures [J]. Neural Networks, 2005, 18(5): 602-610.

[3] SZEGEDY C, VANHOUCKE V, IOFFE S, et al. Rethinking the inception architecture for computer vision [C] // International Conference on Computer Vision and Pattern Recognition. Las Vegas: IEEE, 2016: 2818-2826.

[4] DENIL M, SHAKIBI B, DINH L, et al. Predicting parameters in deep learning [C] // Advances in Neural Information Processing Systems. Lake Tahoe: MIT, 2013: 2148-2156.

[5] SRINIVAS S, BABU R V. Data-free parameter pruning for deep neural networks [EB/OL]. [2018-09-06]. http://arxiv.org/abs/1507.06149.

[6] HAN S, POOL J, TRAN J, et al. Learning both weights and connections for efficient neural network [C] // Advances in Neural Information Processing Systems. Montreal: MIT, 2015: 1135-1143.

[7] MARIET Z, SRA S. Diversity networks: neural network compression using determinantal point processes [EB/OL]. [2018-05-13]. http://arxiv.org/abs/1511.05077.

[8] HAN S, MAO H, DALLY W J. Deep compression: compressing deep neural networks with pruning, trained quantization and Huffman coding [EB/OL]. [2018-08-09]. http://arxiv.org/abs/1510.00149.

[9] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition [EB/OL]. [2018-07-22]. http://arxiv.org/abs/1409.1556.

[10] IANDOLA F N, HAN S, MOSKEWICZ M W, et al. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size [EB/OL]. [2018-07-14]. http://arxiv.org/abs/1602.07360.

[11] HAN S, LIU X, MAO H, et al. EIE: efficient inference engine on compressed deep neural network [J]. ACM SIGARCH Computer Architecture News, 2016, 44(3): 243-254. doi: 10.1145/3007787.3001163.

[12] MATHIEU M, HENAFF M, LECUN Y. Fast training of convolutional networks through FFTs [EB/OL]. [2018-09-03]. http://arxiv.org/abs/1312.5851.

[13] RASTEGARI M, ORDONEZ V, REDMON J, et al. XNOR-Net: ImageNet classification using binary convolutional neural networks [C] // European Conference on Computer Vision. Cham: Springer, 2016: 525-542.

[14] WEN W, WU C, WANG Y, et al. Learning structured sparsity in deep neural networks [C] // Advances in Neural Information Processing Systems. Barcelona: MIT, 2016: 2074-2082.

[15] LI H, KADAV A, DURDANOVIC I, et al. Pruning filters for efficient ConvNets [EB/OL]. [2018-09-11]. http://arxiv.org/abs/1608.08710.

[16] MITTAL D, BHARDWAJ S, KHAPRA M M, et al. Recovering from random pruning: on the plasticity of deep convolutional neural networks [EB/OL]. [2018-09-12]. http://arxiv.org/abs/1801.10447.

[17] ZHU M, GUPTA S. To prune, or not to prune: exploring the efficacy of pruning for model compression [EB/OL]. [2018-06-23]. http://arxiv.org/abs/1710.01878.

[18] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition [C] // Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas: IEEE, 2016: 770-778.

[19] ZAGORUYKO S. 92.45% on CIFAR-10 in Torch [EB/OL]. [2018-07-30]. http://torch.ch/blog/2015/07/30/cifar.html.

[20] IOFFE S, SZEGEDY C. Batch normalization: accelerating deep network training by reducing internal covariate shift [EB/OL]. [2018-07-16]. http://arxiv.org/abs/1502.03167.

[21] HU H, PENG R, TAI Y W, et al. Network trimming: a data-driven neuron pruning approach towards efficient deep architectures [EB/OL]. [2018-07-19]. http://arxiv.org/abs/1607.03250.