By Kuniko Paxton
Review Details
Reviewer has chosen not to be Anonymous
Overall Impression: Good
Content:
Technical Quality of the paper: Good
Originality of the paper: Yes, but limited
Adequacy of the bibliography: Yes, but see detailed comments
Presentation:
Adequacy of the abstract: Yes
Introduction: background and motivation: Good
Organization of the paper: Needs improvement
Level of English: Satisfactory
Overall presentation: Good
Detailed Comments:
This research provides a systematic mapping between neurosymbolic architectures and stages of bias mitigation. The topic is timely and the contribution valuable, as the integration of symbolic reasoning into fairness-aware AI has not been sufficiently explored. With improved structure, clearer differentiation from existing surveys, and enhanced presentation of figures and references, the work is publishable. The introduction lacks sufficient references and would benefit from clearer phrasing in several places. The Bias Mitigation and Neurosymbolic Architectures sections show considerable overlap with previous studies, and the differences should be made more explicit. Minor revisions are suggested for the remaining sections. The details are as follows:
1. Missing references:
a. Trustworthiness aspects
Are these aspects your own interpretation? Or should they be cited?
b. “Researchers in the field of neurosymbolic AI have proposed numerous architectures that incorporate the understandable, reasonable nature of symbols and statistical models that can handle noise and uncertainty.”
The phrase “proposed numerous architectures” implies a clear body of prior work, yet no such references are provided.
c. “Most bias mitigation approaches encode constraints directly into the machine learning procedure and thus implement specific fairness notions for a distinct set of use cases.”
Please add a reference for such a case.
d. “The biggest issue in symbolic systems is the grounding problem, i.e. to find an adequate mapping between the continuous real world and the assumed discrete world of the model”.
This is a well-known issue that has been the subject of past research, so a citation should be included.
Harnad, Stevan. "The symbol grounding problem." Physica D: Nonlinear Phenomena 42.1-3 (1990): 335-346.
2. Ambiguity and suggested rephrasing
a. “Fairness stands out among these concepts, as this is an aspect of trustworthiness that ADM actually promises to enhance compared to human decisions”.
Clarify how fairness differs from the other components of trustworthiness, and in what way ADM systems are assumed to improve fairness relative to human judgment.
b. “While bias detection queries whether data or a prediction satisfies a fairness constraint, bias mitigation employs fairness constraints on the data, the prediction model or the output”.
Does this mean “While bias detection queries whether predictions satisfy a fairness constraint, bias mitigation employs fairness constraints on the data and the models”?
c. “i.e., are tied to one single formal definition of fairness”
What is meant by “one single formal definition”?
3. Clearly state the distinctions from other review studies in the same field. Emphasize that this paper maps types of neurosymbolic approaches to bias mitigation techniques. This distinction should be explicitly linked to your listed contributions.
4. Clearly state the ultimate outcome or impact of the listed items in the contribution subsection.
1. Missing reference on “Many wide-spread notions of fairness focus on binary classification tasks with one binary protected attribute”.
For example:
Pagano, Tiago P., et al. "Bias and unfairness in machine learning models: a systematic review on datasets, tools, fairness metrics, and identification and mitigation methods." Big Data and Cognitive Computing 7.1 (2023): 15.
2. Explain “b” in the equation in Group Fairness
1. Although well researched, the presentation is overly enumerative and challenging to follow, with unclear transitions to the subsequent section. The process of bias mitigation has already been reviewed in:
Hort, Max, et al. "Bias mitigation for machine learning classifiers: A comprehensive survey." ACM Journal on Responsible Computing 1.2 (2024): 1-52.
Rather than merely introducing it, please clearly articulate how your perspective differs from or extends it.
1. While new references have been added, the content appears largely identical to the following research:
Kautz, Henry. "The third AI summer: AAAI Robert S. Engelmore Memorial Lecture." AI Magazine 43.1 (2022): 105-125.
Emphasize the distinct perspective of this manuscript.
1. “Chiappa (2019) proposed a method to adjust the output of a predictor to satisfy counterfactual fairness.” However, this work is not listed among the references in Table 1.
2. Adding labels to each object in Figure 2 would be preferable. The first model from the left is the SCM, the second is the neural network, and the third is the generated data; this is not explained in Section 5.3 (Neuro:Symbolic→Neuro Bias Mitigation).
3. Similarly, labels should be added to each object in Figure 3.