TY - CONF
T1 - Weight Sharing for Single-Channel LMS
AU - Ibunu, Shamahil
AU - Moore, Karl
AU - Cheong Took, Clive
AU - Mandic, Danilo
PY - 2023/7/2
Y1 - 2023/7/2
N2 - Constraining a group of taps of an adaptive filter to a single value may seem like a futile task, as weight sharing reduces the degrees of freedom of the algorithm, and there are no obvious advantages to implementing such an update scheme. On the other hand, weight sharing is popular in deep learning and underpins the success of convolutional neural networks (CNNs) in numerous applications. To this end, we investigate the advantages of weight sharing in single-channel least mean square (LMS), and propose weight sharing LMS (WSLMS) and partial weight sharing LMS (PWS). In particular, we illustrate how weight sharing can lead to numerous benefits such as an enhanced robustness to noise and a computational cost that is independent of the filter length. Simulations support the analysis.
AB - Constraining a group of taps of an adaptive filter to a single value may seem like a futile task, as weight sharing reduces the degrees of freedom of the algorithm, and there are no obvious advantages to implementing such an update scheme. On the other hand, weight sharing is popular in deep learning and underpins the success of convolutional neural networks (CNNs) in numerous applications. To this end, we investigate the advantages of weight sharing in single-channel least mean square (LMS), and propose weight sharing LMS (WSLMS) and partial weight sharing LMS (PWS). In particular, we illustrate how weight sharing can lead to numerous benefits such as an enhanced robustness to noise and a computational cost that is independent of the filter length. Simulations support the analysis.
U2 - 10.1109/SSP53291.2023.10207966
DO - 10.1109/SSP53291.2023.10207966
M3 - Paper
SP - 185
EP - 189
ER -