Abstract
Weight sharing has been key to the success of convolutional neural networks, as it forces a network to detect common ‘local’ features across an image by applying the same weights to all input samples (pixels). This has been shown to alleviate both computational and performance issues, since fewer data are required for training and the risk of overfitting is reduced. While such an approach has been instrumental in machine learning, it has not yet been adopted in large-scale signal processing paradigms. To this end, we study weight sharing for LMS-type algorithms in a multi-channel setting and analyse its effect on the existence, uniqueness, and convergence of the solution. The proposed Weight Sharing Multichannel Least Mean Squares (WS-MLMS) algorithm minimises the sum of squared errors across channels, rather than performing the traditional minimisation across time. Rigorous analysis of the proposed WS-MLMS algorithm demonstrates that weight sharing leads to better convergence properties and an enhanced capability to cope with a large number of channels, in terms of both computational complexity and stability. Simulation studies on weight sharing, in scenarios as large as 256 × 256 MIMO systems, support the approach.
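The core idea, a single weight vector shared across all channels and updated from the channel-aggregated error, can be illustrated with a minimal sketch. This is an illustrative reconstruction, not the paper's exact algorithm: the function name `ws_mlms`, the data layout, and the averaged cross-channel update are assumptions for the example.

```python
import numpy as np

def ws_mlms(X, D, mu=0.05):
    """Sketch of a weight-sharing multichannel LMS update.

    X: (T, M, N) array — T time steps, M channels, N filter taps.
    D: (T, M) array — desired output per channel at each step.

    A single weight vector w is shared by all M channels; each
    update aggregates the instantaneous errors across channels
    (the 'across channels' minimisation described in the abstract),
    rather than adapting one independent filter per channel.
    """
    T, M, N = X.shape
    w = np.zeros(N)
    for t in range(T):
        e = D[t] - X[t] @ w              # per-channel errors, shape (M,)
        w = w + (mu / M) * (X[t].T @ e)  # shared update, averaged over channels
    return w
```

Because the M per-channel gradients are averaged into one update, the shared filter sees M error samples per time step, which is consistent with the claimed improvement in convergence and stability as the number of channels grows.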
| Original language | English |
| --- | --- |
| Article number | 103580 |
| Journal | Digital Signal Processing |
| Volume | 127 |
| Early online date | 6 May 2022 |
| DOIs | |
| Publication status | Published - Jul 2022 |
Keywords
- least mean squares
- weight sharing