Neural Abstract Reasoner

Abstract reasoning and logic inference are difficult problems for neural networks, yet essential to their applicability in highly structured domains. In this work we demonstrate that a well-known technique, spectral regularization, can significantly boost the capabilities of a neural learner. We introduce the Neural Abstract Reasoner (NAR), a memory-augmented architecture capable of learning and using abstract rules. We show that, when trained with spectral regularization, NAR achieves 78.8% accuracy on the Abstraction and Reasoning Corpus, a fourfold improvement over the best known human hand-crafted symbolic solvers. We provide some intuition for the effects of spectral regularization in the domain of abstract reasoning based on theoretical generalization bounds and Solomonoff's theory of inductive inference.
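The abstract does not spell out how spectral regularization is applied; a common form penalizes the largest singular value (spectral norm) of each weight matrix, estimated cheaply by power iteration. The sketch below illustrates that idea in NumPy; the function names and the penalty coefficient `lam` are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def spectral_norm(W, n_iters=50):
    """Estimate the largest singular value of W via power iteration.

    Illustrative sketch: repeatedly multiply by W and W.T, normalizing
    each time, so the iterate converges to the top singular vectors.
    """
    rng = np.random.default_rng(0)
    v = rng.normal(size=W.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(n_iters):
        u = W @ v
        u /= np.linalg.norm(u)
        v = W.T @ u
        v /= np.linalg.norm(v)
    # u.T @ W @ v approximates sigma_max(W)
    return float(u @ W @ v)

def spectrally_regularized_loss(task_loss, weight_matrices, lam=0.01):
    """Add a spectral-norm penalty (hypothetical coefficient `lam`)
    to the task loss, discouraging large Lipschitz constants."""
    penalty = sum(spectral_norm(W) for W in weight_matrices)
    return task_loss + lam * penalty
```

In practice the penalty (or an explicit spectral normalization of the weights) constrains the network's Lipschitz constant, which connects to the generalization bounds the abstract alludes to.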

  • Published in:
    Knowledge Representation and Reasoning Meets Machine Learning Workshop (KR2ML) at the Conference on Neural Information Processing Systems (NeurIPS)
  • Type:
    Inproceedings
  • Authors:
    V. Kolev, B. Georgiev, S. Penkov
  • Year:
    2020

Citation information

V. Kolev, B. Georgiev, S. Penkov: Neural Abstract Reasoner, Knowledge Representation and Reasoning Meets Machine Learning Workshop (KR2ML) at the Conference on Neural Information Processing Systems (NeurIPS), 2020, https://doi.org/10.48550/arXiv.2011.09860