By Anonymous User
Review Details
Reviewer has chosen to be Anonymous
Overall Impression: Average
Content:
Technical Quality of the paper: Average
Originality of the paper: Yes, but limited
Adequacy of the bibliography: Yes, but see detailed comments
Presentation:
Adequacy of the abstract: Yes
Introduction: background and motivation: Good
Organization of the paper: Satisfactory
Level of English: Satisfactory
Overall presentation: Good
Detailed Comments:
In this contribution, the author proposes to use computational cognitive architectures as a basis for neuro-symbolic systems, tracing contemporary work back to its origins in research done decades ago, when neuro-symbolic models were mostly referred to as connectionist-symbolic models. A historical perspective on neuro-symbolic AI is helpful in contextualizing the most recent advances, and in setting the stage for future publications in this journal.
A few points, though, would require some additional attention by the author.
Minor revisions:
- There is no mention of the Standard Model of the Mind*, a fairly recent - and relevant - proposal that falls within the scope of this paper. It would be beneficial to broaden the set of examples of "Computational Cognitive Architectures" provided in section 4 to include this framework.
- p.2, mid-section: can the author give a couple of examples of "strong advocates" from both the symbolic and the deep-learning/neural communities who have started to see the value of neuro-symbolic AI (e.g., Yann LeCun in his 2022 white paper on "a path towards autonomous machine intelligence")?
- p.4, final paragraph: can the author clarify what he means by saying that part of the reason why two levels exist is that "nature" has designed them as distinct so that they can work synergistically? Is the intention behind this argument to echo evolutionary theories? What would the other part of the reason be? This is admittedly not essential for the paper, but it has some interesting ramifications: primary emotions would be part of System 1 because they are functional to humans' basic responses to stimuli from the environment (e.g., fast responses to threats). Primary emotions would still be relevant, but only synergistically, in System 2 (e.g., according to Damasio's account of the role of emotions in rational decision-making).
- Section 5: there seems to be an unresolved "tension" between ontology and epistemology when the notion of "level" is discussed. Clearly, cognitive-psychological realism implies that the "mechanisms" and the "symbols" modeled by computational cognitive architectures exist in the human mind, which would justify designing AI systems that reflect such characteristics. But what exactly these representations are, and what those mechanisms do in synergy, is the object of different theories, as also argued in the paper. So, would the author agree with the view that two separate levels of information processing exist in humans (ontology), although their fine-grained constituents (representations and mechanisms) and how they work are still under investigation (epistemology)? If so - and even if not - it would help to explicitly address this tension, and/or clarify the ambiguity.
- p.8: The author writes that "some issues involved in dual-process theories are more complex than often assumed", and thus - paraphrasing - we would need a fine-grained understanding through an overarching framework and computational simulations: is the author proposing here to extend an existing cognitive architecture, e.g., Clarion, or to develop a new one? Related to this point, see the request above to include a mention of the Standard Model of the Mind.
Additional bibliographic entry + mention in the paper:
*Laird, J. E.; Lebiere, C.; and Rosenbloom, P. S. 2017. A standard model of the mind: Toward a common computational framework across artificial intelligence, cognitive science, neuroscience, and robotics. AI Magazine, 38(4): 13–26.
Overall, this is a potentially good paper for introducing how computational cognitive architectures can be used as frameworks for neuro-symbolic AI models.