Training Multimodal Systems for Classification with Multiple Objectives

We learn about the world from a diverse range of sensory information. Automated systems lack this ability because research has centred on processing information presented in a single form. Adapting architectures to learn from multiple modalities creates the potential to learn rich representations of the world – but current multimodal systems deliver only marginal improvements over unimodal approaches. Neural networks learn sampling noise during training, degrading performance on unseen data. This research introduces a second objective over the multimodal fusion process, learned with variational inference. Regularisation methods are implemented in the inner training loop to control variance, and the modular structure stabilises performance as additional neurons are added to layers. This framework is evaluated on a multilabel classification task with textual and visual inputs to demonstrate the potential for multiple objectives and probabilistic methods to lower variance and improve generalisation.
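The idea of pairing a classification objective with a variational second objective over the fused representation can be sketched as below. This is a minimal illustration, not the authors' architecture: the layer sizes, the simple concatenation fusion, the `beta` weighting, and all names are assumptions for demonstration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalFusion(nn.Module):
    """Fuse text and image features into a latent Gaussian whose KL
    divergence from a unit-Gaussian prior serves as a second objective
    alongside multilabel classification. Dimensions are illustrative."""

    def __init__(self, text_dim, image_dim, latent_dim, n_labels):
        super().__init__()
        fused_dim = text_dim + image_dim
        self.mu = nn.Linear(fused_dim, latent_dim)
        self.logvar = nn.Linear(fused_dim, latent_dim)
        self.classifier = nn.Linear(latent_dim, n_labels)

    def forward(self, text_feat, image_feat):
        # Simple concatenation fusion of the two modalities
        h = torch.cat([text_feat, image_feat], dim=-1)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterisation trick: z = mu + sigma * eps, eps ~ N(0, I)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.classifier(z), mu, logvar

def multi_objective_loss(logits, targets, mu, logvar, beta=0.1):
    # Objective 1: multilabel classification (binary cross-entropy)
    bce = F.binary_cross_entropy_with_logits(logits, targets)
    # Objective 2: KL divergence of q(z|x) from the N(0, I) prior,
    # acting as a regulariser on the fused representation
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + beta * kl

model = VariationalFusion(text_dim=16, image_dim=8, latent_dim=4, n_labels=3)
text = torch.randn(2, 16)       # batch of 2 textual feature vectors
image = torch.randn(2, 8)       # batch of 2 visual feature vectors
targets = torch.randint(0, 2, (2, 3)).float()  # multilabel targets
logits, mu, logvar = model(text, image)
loss = multi_objective_loss(logits, targets, mu, logvar)
loss.backward()
```

The KL term penalises latent distributions that drift far from the prior, which is one common way a variational objective can act as a regulariser and reduce variance; the `beta` factor trades off the two objectives.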

  • Published in:
    International Workshop on Cross-lingual Event-centric Open Analytics (CLEOPATRA) at the Extended Semantic Web Conference (ESWC)
  • Type:
    Inproceedings
  • Authors:
    J. Armitage, S. Thakur, R. Tripathi, J. Lehmann, M. Maleshkova
  • Year:
    2020

Citation information

J. Armitage, S. Thakur, R. Tripathi, J. Lehmann, M. Maleshkova: Training Multimodal Systems for Classification with Multiple Objectives, International Workshop on Cross-lingual Event-centric Open Analytics (CLEOPATRA) at the Extended Semantic Web Conference (ESWC), 2020, https://doi.org/10.48550/arXiv.2008.11450