Dear Editors:
We are submitting our manuscript “A Mathematical Framework and a Suite of Learning Techniques
for Neural-Symbolic Systems” to the Neurosymbolic Artificial Intelligence journal for review.
Our manuscript introduces Neural-Symbolic Energy-Based Models (NeSy-EBMs), a unifying
mathematical framework for neural-symbolic (NeSy) systems. Specifically, NeSy-EBMs formalize
the architecture of NeSy systems as a composition of neural and symbolic functions, and they
cast prediction and reasoning as mathematical optimization problems. We use our framework to introduce a
suite of learning losses and algorithms and to create a principled taxonomy of NeSy modeling
paradigms based on reasoning capabilities. Additionally, we present Neural Probabilistic Soft
Logic (NeuPSL), an open-source NeSy-EBM codebase that we use in our extensive empirical
analysis. All of the code and data required to reproduce our results are open-sourced and
available in a public GitHub repository.
The contributions of this manuscript are essential for unifying NeSy research and developing
general NeSy algorithms and tools. NeSy-EBMs meet the growing need for a principled foundation
for empowering machine learning models with domain knowledge and consistent reasoning capabilities.
For instance, NeSy-EBMs have already enabled general NeSy inference and learning
algorithms [Dickens et al., 2024a] and new open-source NeSy implementations [Pryor
et al., 2023a]. Moreover, the NeSy-EBM framework provides common ground for connecting NeSy to
the broader machine learning community. Finally, NeuPSL is a scalable and expressive tool that
is already facilitating real-world applications of NeSy, including dialog structure induction [Pryor
et al., 2023b], natural language question answering [Dickens et al., 2024b], and autonomous
agent navigation and exploration [Zhou et al., 2023]. The NeSy-EBM framework is quickly
establishing itself as a powerful tool for unifying and formalizing the connections and capabilities
of NeSy models and for developing impactful new NeSy architectures and algorithms.
This paper integrates and expands upon our prior work on NeSy integrations and applications
via the NeSy-EBM framework, namely Pryor et al. [2023a], Dickens et al. [2024a], and
Dickens et al. [2024b], with several novel, unpublished contributions:
• We integrate the definition of NeSy-EBMs introduced in Pryor et al. [2023a] with the new
mathematical abstraction of symbolic potentials proposed for NeSy modeling patterns in
Dickens et al. [2024b].
• We provide an extensive literature review of NeSy applications and an analysis of related
work that establishes NeSy-EBMs as a unifying paradigm.
• We derive gradients for a general class of learning losses with respect to both neural and symbolic
parameters. Previous works [Dickens et al., 2024a,b] derive only ad-hoc learning loss gradients.
• We introduce a stochastic variant of NeSy-EBMs to support the principled use of stochastic
policy methods for end-to-end NeSy learning.
• We perform an extensive empirical evaluation exploring NeSy for constraint satisfaction and
joint reasoning, fine-tuning and adaptation, few-shot reasoning, and semi-supervised learning. Our
evaluation includes unpublished NeuPSL results on multiple datasets, including Warcraft
pathfinding and autonomous vehicle situation awareness.
We provide thorough motivation and related work sections to position our theoretical framework.
Moreover, we supplement our ideas and definitions with examples and detailed figures. Our
algorithms and theorems are derived with the necessary background and detail.
Sincerely,
Charles Dickens (University of California Santa Cruz)
Connor Pryor (University of California Santa Cruz)
Changyu Gao (University of Wisconsin Madison)
Alon Albalak (University of California Santa Barbara)
Eriq Augustine (University of California Santa Cruz)
William Wang (University of California Santa Barbara)
Stephen Wright (University of Wisconsin Madison)
Lise Getoor (University of California Santa Cruz)
References
C. Dickens, C. Gao, C. Pryor, S. Wright, and L. Getoor. Convex and bilevel optimization for
neuro-symbolic inference and learning. In International Conference on Machine Learning (ICML), 2024a.
C. Dickens, C. Pryor, and L. Getoor. Modeling patterns for neural-symbolic reasoning using
energy-based models. In AAAI Spring Symposium on Empowering Machine Learning and Large
Language Models with Domain and Commonsense Knowledge, 2024b.
C. Pryor, C. Dickens, E. Augustine, A. Albalak, W. Y. Wang, and L. Getoor. NeuPSL: Neural
probabilistic soft logic. In International Joint Conference on Artificial Intelligence (IJCAI), 2023a.
C. Pryor, Q. Yuan, J. Z. Liu, S. M. Kazemi, D. Ramachandran, T. Bedrax-Weiss, and L. Getoor.
Using domain knowledge to guide dialog structure induction via neural probabilistic soft logic.
In Annual Meeting of the Association for Computational Linguistics (ACL), 2023b.
K. Zhou, K. Zheng, C. Pryor, Y. Shen, H. Jin, L. Getoor, and X. E. Wang. ESC: Exploration with
soft commonsense constraints for zero-shot object navigation. In International Conference on
Machine Learning (ICML), 2023.