Activity Sparsity Complements Weight Sparsity for Efficient RNN Inference

Rishav Mukherji, Mark Schöne, Khaleelulla Khan Nazeer, Christian Mayr, Anand Subramoney

Research output: Contribution to conference › Other › peer-review


Artificial neural networks open up unprecedented machine learning capabilities at the cost of ever-growing computational requirements. Sparsifying the parameters, often achieved through weight pruning, has been identified as a powerful technique to reduce the number of model parameters and the computational operations of neural networks. Yet sparse activations, while omnipresent in both biological neural networks and deep learning systems, have not been fully utilized as a compression technique in deep learning. Moreover, the interaction between sparse activations and weight pruning is not fully understood. In this work, we demonstrate that activity sparsity can compose multiplicatively with parameter sparsity in a recurrent neural network model based on the GRU that is designed to be activity sparse. We achieve up to 20× reduction of computation while maintaining perplexities below 60 on the Penn Treebank language modeling task. This magnitude of reduction has not been achieved previously with solely sparsely connected LSTMs, and the language modeling performance of our model has not been achieved previously with any sparsely activated recurrent neural networks or spiking neural networks. Neuromorphic computing devices are especially well suited to exploiting dynamic activity sparsity, and our results provide strong evidence that making deep learning models activity sparse and porting them to neuromorphic devices can be a viable strategy that does not compromise on task performance. Our results also drive further convergence of methods from deep learning and neuromorphic computing for efficient machine learning.
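To illustrate the multiplicative composition the abstract describes, here is a minimal back-of-the-envelope sketch (not the authors' code; the function name `effective_macs` and the density values are hypothetical): if weight pruning keeps a fraction of the connections and only a fraction of units produce a nonzero activation per step, the two fractions multiply in the expected multiply-accumulate (MAC) count of a dense layer.

```python
def effective_macs(n_in: int, n_out: int,
                   weight_density: float, activity_density: float) -> float:
    """Expected MACs per step for one layer, assuming (hypothetically) that
    weight sparsity and activity sparsity are statistically independent."""
    dense_macs = n_in * n_out
    # Each surviving weight is only used when its input unit is active,
    # so the two density factors compose multiplicatively.
    return dense_macs * weight_density * activity_density

# Illustrative numbers only: 75% of weights pruned, 80% of units silent
# per step gives a combined density of 0.25 * 0.2 = 0.05, i.e. a 20x
# reduction of the kind reported in the abstract.
dense = 1024 * 1024
sparse = effective_macs(1024, 1024, weight_density=0.25, activity_density=0.2)
print(f"reduction: {dense / sparse:.0f}x")
```

Under this independence assumption, neither form of sparsity alone needs to be extreme for the product to yield a large overall reduction.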
Original language: English
Publication status: Published - 16 Dec 2023
Event: ML with New Compute Paradigms at NeurIPS 2023 - New Orleans, United States
Duration: 16 Dec 2023 → …


Workshop: ML with New Compute Paradigms
Abbreviated title: MLNCP
Country/Territory: United States
City: New Orleans
Period: 16/12/23 → …
