Benchmarking in Neuro-Symbolic AI
Neural-symbolic (NeSy) AI has gained considerable popularity by enhancing learning models with explicit reasoning capabilities. New systems and new benchmarks are constantly being introduced and used to evaluate learning and reasoning skills. The sheer variety of systems and benchmarks, however, makes it difficult to establish a fair comparison among frameworks, let alone a unifying set of benchmarking criteria. This paper analyzes the state of the art in benchmarking NeSy systems, studies its limitations, and proposes ways to overcome them. We categorize popular neural-symbolic frameworks into three groups: model-theoretic, proof-theoretic fuzzy, and proof-theoretic probabilistic systems. We show how these three categories have distinct strengths and weaknesses, and how this is reflected in the types of tasks and benchmarks to which they are applied.
- Published in: Learning and Reasoning (ILP 2024)
- Type: Inproceedings
- Authors: Manhaeve, Robin, et al.
- Year: 2026
Citation: Manhaeve, R., et al.: Benchmarking in Neuro-Symbolic AI. In: Learning and Reasoning (ILP 2024), pp. 238–249. Springer Nature Switzerland (2026).
@Inproceedings{Manhaeve.etal.2026a,
author={Manhaeve, Robin and Giannini, Francesco and Ali, Mehdi and Azzolini, Damiano and Bizzarri, Alice and Borghesi, Andrea and Bortolotti, Samuele and De Raedt, Luc and Dhami, Devendra and Diligenti, Michelangelo and Dumančić, Sebastijan and Faltings, Boi and Gentili, Elisabetta and Gerevini, Alfonso and Gori, Marco and Guns, Tias and Homola, Martin and Kersting, Kristian and Lehmann, Jens and Lombardi, Michele and Lorello, Luca and Marconato, Emanuele and Melacci, Stefano and Passerini, Andrea and Paul, Debjit and Riguzzi, Fabrizio and Teso, Stefano and Yorke-Smith, Neil and Lippi, Marco},
title={Benchmarking in Neuro-Symbolic {AI}},
booktitle={Learning and Reasoning (ILP 2024)},
pages={238--249},
publisher={Springer Nature Switzerland},
year={2026},
abstract={Neural-symbolic ({NeSy}) {AI} has gained a lot of popularity by enhancing learning models with explicit reasoning capabilities. Both new systems and new benchmarks are constantly introduced and used to evaluate learning and reasoning skills. The large variety of systems and benchmarks, however, makes it difficult to establish a fair comparison among the various frameworks, let alone a unifying...}}