By Anonymous User
Review Details
Reviewer has chosen to be Anonymous
Overall Impression: Average
Content:
Technical Quality of the paper: Average
Originality of the paper: Yes, but limited
Adequacy of the bibliography: Yes, but see detailed comments
Presentation:
Adequacy of the abstract: Yes
Introduction: background and motivation: Good
Organization of the paper: Satisfactory
Level of English: Satisfactory
Overall presentation: Good
Detailed Comments:
Summary:
The paper presents a neural-symbolic framework that aims to integrate connectionist methods (such as large language models) with symbolic reasoning techniques to address complex, knowledge-intensive question answering tasks. The authors propose a modular architecture composed of a knowledge-oriented data manager, knowledge manipulator, reasoning planner, and reasoning conductor, and they emphasize the importance of human-in-the-loop interaction for continuous improvement. The ambition to blend the fast, intuitive processing of System 1 with the deliberate, step-by-step reasoning of System 2 (as framed by Kahneman’s cognitive theory) is an interesting angle.
Strengths:
- The paper clearly delineates the components of the proposed architecture and discusses how neural and symbolic methods might complement each other.
- Framing the integration in the context of dual-process theory (System 1 vs. System 2) provides a compelling narrative for why combining these approaches is desirable.
- The authors provide an extensive review of existing techniques and potential directions for integrating heterogeneous knowledge sources.
Weaknesses:
- The manuscript neither discusses in depth nor compares its approach against recent advances in reasoning with large models such as OpenAI o1/o3 or DeepSeek R1. These systems leverage step-by-step reasoning and test-time compute, concepts that align closely with the System 2 cognitive framework. A discussion of these works, including performance benchmarks or methodological differences, would significantly strengthen the paper. For instance, it would be helpful to explain why and when the proposed framework should be preferred over large reasoning models.
- The work reads primarily as a position paper. While the conceptual framework is interesting, the submission lacks an empirical evaluation or a proof-of-concept demonstration. Without experimental results or case studies to validate the proposed architecture, the contribution remains largely speculative. An evaluation section with quantitative and/or qualitative results is necessary to substantiate the claims.
- If the intention is to address knowledge-based question answering specifically, the paper should cite and discuss related systems. For instance, the KBQA neuro-symbolic system described in “Leveraging Abstract Meaning Representation for Knowledge Base Question Answering” (https://aclanthology.org/2021.findings-acl.339/) is absent from the literature review.
Conclusion:
While the paper lays out an interesting framework and provides a detailed survey of potential techniques, it ultimately falls short of offering a concrete, validated contribution.
Major revisions are needed to:
- Incorporate a discussion of, and ideally a comparative evaluation against, recent state-of-the-art systems for reasoning with large language models.
- Provide empirical results or a demonstrable proof-of-concept to move the work beyond a speculative discussion.
- Update the literature review to include important works such as the KBQA system based on Abstract Meaning Representation.