Review Details
Reviewer has chosen to be Anonymous
Overall Impression: Good
Content:
Technical Quality of the paper: Good
Originality of the paper: Yes, but limited
Adequacy of the bibliography: Yes
Presentation:
Adequacy of the abstract: Yes
Introduction: background and motivation: Limited
Organization of the paper: Needs improvement
Level of English: Satisfactory
Overall presentation: Good
Detailed Comments:
The paper proposes NeSy-EBM, a unifying framework intended to organize neuro-symbolic (NeSy) systems. It categorizes systems into three paradigms: deep symbolic variables (DSVar), deep symbolic parameters (DSPar), and deep symbolic potentials. The paper then introduces NeuPSL as a framework for modeling a family of NeSy systems, together with several learning techniques.
While the paper makes significant technical contributions regarding NeuPSL, its learning algorithms, and its experimental evaluation (Sections 4, 5, and 6), the initial presentation of the "unifying framework" (Section 3) suffers from confusing notation and unclear contributions. I would suggest condensing this theoretical preamble, which does not add much, and focusing on the meat of the paper.
In particular:
1. On notation heaviness: Many definitions could be simplified.
1. The distinction between $x_{nn}$, $g_{nn}$, $x_{sy}$, and $g_{sy}$ and their respective parameters is clear.
2. Then, the energy function $E$, the symbolic function $g_{sy}$, and the potential functions $\psi$ all seem to be different layers of abstraction over the same concept. Equation 3 explicitly defines $E$ as mapping directly to $g_{sy}$, and the potentials are likewise defined mostly in terms of $g_{sy}$. These abstractions do not add much to the reader's understanding.
3. DSVar and DSPar explicitly group their arguments differently: $\psi([y, x_{sy}, g_{nn}], w_{sy})$ vs. $\psi([y, x_{sy}], [w_{sy}, g_{nn}])$. Mathematically, however, this is just syntactic sugar. The distinction seems to serve *conceptual* purposes, but it is not clear why it matters in a mathematical framing (a sketch after this list spells out this point and the previous one).
2. On the value of the mathematical framework: it is not clear what understanding it brings to existing NeSy frameworks.
1. In Appendix C, the paper maps three important NeSy frameworks (Semantic Loss, DeepProbLog, Logic Tensor Networks) onto its framework. However, this mapping essentially reduces each system to a single equation for $g_{sy}$ (e.g., Equation 73 for Semantic Loss); the potential functions and related machinery do not help clarify any of the frameworks' internal mechanics. I would therefore simplify and condense much of Section 3's definitions, focusing on $x_{nn}$, $g_{nn}$, $x_{sy}$, and $g_{sy}$.
2. The categorization of systems into those that modify neural predictions (DSPar) and those that do not (DSVar), as detailed in Figure 2, is intuitive but should be nuanced. Since many DSVar frameworks also modify neural predictions via gradient descent, don't the two paradigms end up serving the same purpose? A related question: why are Semantic Loss and LTN categorized differently (DSPar and DSVar, respectively)? Both define a differentiable loss function and act on the neural predictions in the same way, through backpropagation (see the loss formulations recalled after this list).
3. Technical contributions: Sections 4 and 5, on NeuPSL and learning, are by contrast very clear and make significant contributions.
1. I would focus on these, perhaps highlighting more explicitly the new contributions relative to the original NeuPSL paper.
2. The experimental evaluation in Section 6 is also thorough.
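To make the notation point (items 1.2 and 1.3) concrete, here is a minimal sketch of my reading; the symbols are the paper's, but the compositions are my paraphrase and should be checked against Equation 3:

$$E(y, x_{sy}, x_{nn}, w_{sy}, w_{nn}) = g_{sy}\big(y, x_{sy}, g_{nn}(x_{nn}, w_{nn}), w_{sy}\big),$$

i.e., $E$ is just $g_{sy}$ with the neural output substituted in. Likewise, for any fixed neural output $g_{nn}$, the DSVar and DSPar potentials evaluate to the same quantity:

$$\psi\big([y, x_{sy}, g_{nn}], w_{sy}\big) = \psi\big([y, x_{sy}], [w_{sy}, g_{nn}]\big),$$

the only difference being whether $g_{nn}$ is bracketed with the variables or with the parameters.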
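On the Semantic Loss vs. LTN question, recalling the standard formulations as I understand them (Xu et al., 2018; Badreddine et al., 2022; the notation here is mine, not the paper's): given neural probabilities $p$ and a propositional constraint $\alpha$, Semantic Loss is

$$L_{SL}(\alpha, p) \propto -\log \sum_{y \models \alpha} \; \prod_{i : y \models X_i} p_i \prod_{i : y \models \lnot X_i} (1 - p_i),$$

while LTN minimizes the dissatisfaction of a knowledge base $\mathcal{K}$ under a real-valued (fuzzy) grounding $\mathcal{G}_\theta$,

$$L_{LTN}(\theta) = 1 - \operatorname{SatAgg}_{\phi \in \mathcal{K}} \mathcal{G}_\theta(\phi).$$

Both are differentiable in the network weights and are minimized by backpropagation, which is why the different categorization surprised me.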
The paper contains high-quality algorithmic work, but it is prefaced by an overly complex theoretical framing whose unifying ambition, in my opinion, does not do much for the reader's understanding. I recommend streamlining Section 3 significantly and positioning Sections 4 and 5 as the primary contributions.