TY - JOUR
T1 - Exploiting Activation Sparsity for Fast CNN Inference on Mobile GPUs
AU - Oh, Chanyoung
AU - So, Junhyuk
AU - Kim, Sumin
AU - Yi, Youngmin
N1 - Publisher Copyright:
© 2021 Association for Computing Machinery.
PY - 2021/10
Y1 - 2021/10
AB - Over the past several years, the need for on-device deep learning has been rapidly increasing, and efficient CNN inference on mobile platforms has been actively researched. Sparsity exploitation has been one of the most active research themes, but most studies focus on weight sparsity obtained by weight pruning. Activation sparsity, in contrast, requires compression at runtime for every input tensor. Hence, research on activation sparsity mainly targets NPUs, which can handle this efficiently with dedicated hardware logic. In this paper, we observe that it is difficult to accelerate CNN inference on mobile GPUs with natural activation sparsity, and that the widely used CSR-based sparse convolution is not sufficiently effective due to the compression overhead. We propose several novel sparsification methods that boost activation sparsity without harming accuracy. In particular, we selectively sparsify some layers to an extremely high sparsity and adopt either sparse or dense convolution on a per-layer basis. Further, we present an efficient sparse convolution method that requires no compression and demonstrate that it can be faster than the CSR-based implementation. With ResNet-50, we achieved a 1.88x speedup over TFLite on a Mali-G76 GPU.
KW - On-device deep learning
KW - convolutional neural network
KW - sparsity
UR - http://www.scopus.com/inward/record.url?scp=85115836730&partnerID=8YFLogxK
U2 - 10.1145/3477008
DO - 10.1145/3477008
M3 - Article
AN - SCOPUS:85115836730
SN - 1539-9087
VL - 20
JO - ACM Transactions on Embedded Computing Systems
JF - ACM Transactions on Embedded Computing Systems
IS - 5s
M1 - 77
ER -