Methods of Training Task Decompositions in Gated Modular Neural Networks

Yamuna Krishnamurthy

Research output: Thesis › Doctoral Thesis


Abstract

Mixture of experts (MoE), introduced over 20 years ago, is the simplest gated modular neural network architecture. The gate in the MoE architecture learns task decompositions, and the individual experts (modules) learn simpler functions appropriate to the gate’s task decomposition. This could inherently make MoE interpretable, as errors can be attributed either to the gate or to individual experts, thereby providing a diagnosis at the gate or expert level. Because experts specialize, they could also be modularly transferred to other tasks. However, our initial experiments showed that the original MoE architecture and its end-to-end expert and gate training method do not guarantee intuitive task decompositions and expert utilization; indeed, they can fail spectacularly even for simple data such as MNIST. This thesis therefore explores the task decompositions among experts learned by the gate in existing MoE architectures and training methods, and demonstrates how they can fail for even simple datasets without additional regularization. We then propose five novel MoE training algorithms and MoE architectures: (1) dual-temperature gate and expert training, which uses a softer gate distribution to train the experts and a harder gate distribution to train the gate; (2) two no-gate expert training algorithms in which the experts are trained without a gate: (a) the loudest expert method, which selects the expert with the lowest estimate of its own loss for the sample both during training and inference, and (b) the peeking expert algorithm, which during training selects and trains the expert with the best prediction probability for the target class of a sample; a gate is then reverse distilled from the pre-trained experts for conditional computation during inference; (3) the attentive gating MoE architecture, which computes the gate probabilities by attending to the expert outputs with additional attention weights during training; the trained attentive gate model is then distilled to a simpler original MoE model for conditional computation during inference; and (4) the expert loss gating MoE architecture, where the gate output is not the expert distribution but the expert log loss.
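
To make the dual-temperature idea concrete, the following is a minimal illustrative sketch in PyTorch, not the thesis implementation: a single gate network produces logits that are softened with a high temperature to weight the experts’ losses, and sharpened with a low temperature for the gate’s own objective. The network sizes, the temperatures T_soft and T_hard, and the particular gate objective used here are assumptions for illustration only.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DualTemperatureMoE(nn.Module):
    """Illustrative MoE with a dual-temperature gate (sketch, not the thesis code)."""
    def __init__(self, in_dim=784, hidden=64, n_classes=10, n_experts=4,
                 T_soft=2.0, T_hard=0.5):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, n_classes))
            for _ in range(n_experts)])
        self.gate = nn.Linear(in_dim, n_experts)
        self.T_soft, self.T_hard = T_soft, T_hard

    def loss(self, x, y):
        expert_logits = torch.stack([e(x) for e in self.experts], dim=1)  # (B, E, C)
        gate_logits = self.gate(x)                                        # (B, E)
        n_experts = expert_logits.size(1)
        # Per-expert classification loss for every sample: shape (B, E).
        per_expert_nll = F.cross_entropy(
            expert_logits.flatten(0, 1),
            y.repeat_interleave(n_experts),
            reduction='none').view(-1, n_experts)
        # Softer (high-temperature) gate distribution weights the expert losses;
        # the gate logits are detached so only the experts get these gradients.
        p_soft = F.softmax(gate_logits.detach() / self.T_soft, dim=-1)
        expert_loss = (p_soft * per_expert_nll).sum(dim=-1).mean()
        # Harder (low-temperature) distribution drives the gate's objective;
        # the expert losses are detached so only the gate gets these gradients.
        p_hard = F.softmax(gate_logits / self.T_hard, dim=-1)
        gate_loss = (p_hard * per_expert_nll.detach()).sum(dim=-1).mean()
        return expert_loss + gate_loss

A training step would simply call loss(x, y).backward() followed by an optimizer step over all parameters; the two temperatures control how concentrated the gate’s routing is along each of the two gradient paths.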
We also propose a novel, flexible, data-driven soft constraint, Ls, that uses the similarity between samples to regulate the gate’s expert distribution. We empirically validate our methods on the MNIST, FashionMNIST and CIFAR-10 datasets. The empirical results show that our novel training and regularization algorithms outperform benchmark MoE training methods.
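
The exact form of Ls is defined in the thesis; as a rough, hedged illustration of a data-driven soft constraint of this kind, the sketch below penalises the gate when inputs that are similar (here, by cosine similarity within a batch) receive very different expert distributions. The pairing rule, similarity measure and weighting are assumptions, not the thesis definition.

import torch
import torch.nn.functional as F

def similarity_soft_constraint(x, gate_probs):
    """x: (B, ...) inputs; gate_probs: (B, E) gate distributions over experts."""
    # Treat input similarities as constants: detach so no gradient flows to x.
    feats = F.normalize(x.detach().flatten(1), dim=-1)
    sim = (feats @ feats.t()).clamp(min=0)   # (B, B) non-negative cosine similarity
    sim = sim.fill_diagonal_(0)              # ignore self-pairs
    # Squared distance between the gate distributions of every pair of samples,
    # weighted by how similar the corresponding inputs are.
    dist = torch.cdist(gate_probs, gate_probs, p=2) ** 2
    return (sim * dist).sum() / sim.sum().clamp(min=1e-8)

Such a term would be added to the MoE training loss with a regularisation weight, encouraging the gate’s routing to vary smoothly with the data rather than collapsing onto a single expert.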
Original language: English
Qualification: Ph.D.
Awarding Institution:
  • Royal Holloway, University of London
Supervisors/Advisors:
  • Watkins, Chris, Supervisor
Award date: 1 Nov 2023
Publication status: Unpublished - 2024

Keywords

  • machine learning
  • deep learning
  • mixture of experts
  • modular deep learning
