Weight sharing for LMS algorithms: Convolutional Neural Networks Inspired Multichannel Adaptive Filtering

Clive Cheong Took, Danilo Mandic

Research output: Contribution to journal › Article › peer-review



Weight sharing has been a key to the success of convolutional neural networks, as it forces a neural network to detect common ‘local’ features across an image by applying the same weights across all input samples (pixels). This has been shown to alleviate both computational and performance issues, as less data are required for training and the risk of overfitting is lower. While such an approach has been instrumental in machine learning problems, it has not yet been adopted in large-scale signal processing paradigms. To this end, we study weight sharing for LMS-type algorithms in a multi-channel setting and analyse its effect on the existence, uniqueness, and convergence of the solution. As a result, the proposed Weight Sharing Multichannel Least Mean Squares (WS-MLMS) algorithm minimises the sum of squared errors across “channels”, rather than performing the traditional minimisation across “time”. Rigorous analysis of our proposed WS-MLMS algorithm demonstrates that weight sharing leads to better convergence properties and an enhanced capability to cope with a large number of channels, in terms of both computational complexity and stability. Simulation studies on weight sharing, in scenarios as massive as 256 × 256 MIMO systems, support the approach.
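To illustrate the idea described in the abstract, the following is a minimal sketch (not the authors' implementation) of a weight-shared multichannel LMS update in NumPy. It assumes one common FIR filter is maintained for all channels and that, at each time step, the instantaneous LMS gradients are aggregated across channels, i.e. the sum of squared channel errors is minimised, instead of running an independent LMS filter per channel. All names (`ws_mlms`, the channel count `C`, filter length `L`) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: C channels all observing the same unknown length-L FIR filter
C, L, T = 8, 4, 2000                      # channels, filter taps, time steps
w_true = rng.standard_normal(L)

# Per-channel inputs and noisy desired signals d[c, t] = (w_true * x[c])[t] + noise
x = rng.standard_normal((C, T))
d = np.array([np.convolve(x[c], w_true)[:T] for c in range(C)])
d += 0.01 * rng.standard_normal((C, T))

def ws_mlms(x, d, L, mu=0.01):
    """Weight-sharing multichannel LMS: a single filter w shared by all channels.

    At each time step the update aggregates the instantaneous error gradients
    across channels, so the cost being minimised is the sum of squared errors
    over channels at that instant, rather than a per-channel sum over time.
    """
    C, T = x.shape
    w = np.zeros(L)
    for t in range(L - 1, T):
        grad = np.zeros(L)
        for c in range(C):
            u = x[c, t - L + 1:t + 1][::-1]   # regressor for channel c
            e = d[c, t] - w @ u               # instantaneous channel error
            grad += e * u                     # accumulate gradient across channels
        w += mu * grad / C                    # one shared-weight update per step
    return w

w_hat = ws_mlms(x, d, L)
print(np.linalg.norm(w_hat - w_true))         # small residual after convergence
```

Averaging the gradient over channels keeps the effective step size independent of the number of channels, which loosely mirrors the stability benefit the abstract attributes to weight sharing when the channel count grows large.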
Original language: English
Article number: 103580
Journal: Digital Signal Processing
Early online date: 6 May 2022
Publication status: Published - Jul 2022


  • least mean square
  • weight sharing
