Exploiting Activation Sparsity for Fast CNN Inference on Mobile GPUs

Chanyoung Oh, Junhyuk So, Sumin Kim, Youngmin Yi

Research output: Contribution to journal › Article › peer-review



Over the past several years, the need for on-device deep learning has been rapidly increasing, and efficient CNN inference on mobile platforms has been actively researched. Sparsity exploitation has been one of the most active research themes, but most studies focus on weight sparsity obtained through weight pruning. Activation sparsity, by contrast, requires compression at runtime for every input tensor; hence, research on activation sparsity has mainly targeted NPUs, which can process it efficiently with dedicated hardware logic. In this paper, we observe that natural activation sparsity is difficult to exploit for accelerating CNN inference on mobile GPUs, and that the widely used CSR-based sparse convolution is not sufficiently effective due to its compression overhead. We propose several novel sparsification methods that boost activation sparsity without harming accuracy. In particular, we selectively sparsify some layers to an extremely high sparsity and adopt sparse or dense convolution on a per-layer basis. Further, we present an efficient sparse convolution method that requires no compression and demonstrate that it can be faster than the CSR implementation. With ResNet-50, we achieved a 1.88× speedup over TFLite on a Mali-G76 GPU.
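To make the compression-overhead argument concrete, here is a minimal sketch (not the paper's implementation) of the CSR scheme the abstract refers to. Unlike weights, which can be compressed once offline, a ReLU activation map must be converted to CSR at runtime for every input, and that conversion pass is pure overhead relative to a dense kernel:

```python
# Illustrative sketch of CSR-based sparse computation on an activation map.
# The function names and the toy matrix below are our own, not the paper's.

def to_csr(dense):
    """Compress a 2-D activation matrix into CSR (values, col_idx, row_ptr).

    For activation sparsity this pass runs per inference, per input tensor;
    this runtime cost is the compression overhead the paper highlights.
    """
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))
    return values, col_idx, row_ptr

def csr_matvec(values, col_idx, row_ptr, x):
    """Multiply a CSR matrix by a dense vector, visiting only nonzeros."""
    y = []
    for r in range(len(row_ptr) - 1):
        acc = 0
        for k in range(row_ptr[r], row_ptr[r + 1]):
            acc += values[k] * x[col_idx[k]]
        y.append(acc)
    return y

# A ReLU output with many zeros: the compute savings in csr_matvec must
# outweigh the to_csr conversion for sparse execution to win overall.
act = [[0, 2, 0, 0],
       [1, 0, 0, 3],
       [0, 0, 0, 0]]
vals, cols, ptrs = to_csr(act)
print(csr_matvec(vals, cols, ptrs, [1, 1, 1, 1]))  # → [2, 4, 0]
```

This is why the abstract argues that natural sparsity alone is often insufficient on mobile GPUs: unless sparsity is boosted high enough, the per-input conversion cost erases the gains, motivating the paper's compression-free sparse convolution.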

Original language: English
Article number: 77
Journal: ACM Transactions on Embedded Computing Systems
Issue number: 5s
State: Published - Oct 2021


Keywords:
  • On-device deep learning
  • convolutional neural network
  • sparsity


