By Anonymous User
Review Details
Reviewer has chosen to be Anonymous
Overall Impression: Average
Content:
Technical Quality of the paper: Average
Originality of the paper: Yes
Adequacy of the bibliography: Yes
Presentation:
Adequacy of the abstract: Yes
Introduction: background and motivation: Good
Organization of the paper: Needs improvement
Level of English: Satisfactory
Overall presentation: Average
Detailed Comments:
In symbol encoding (p. 3), the authors write: "we have verified for d ≈ 100…1000 that unary vectors are generated with a relative precision on the magnitude below 0.3%, while orthogonality is verified with a relative precision below 0.4%, and the noise standard deviation prediction relative precision is below 0.3%." However, I did not find the detailed verification supporting these claims, which are foundational to the hypothesis testing described later in Appendix A and Footnote 5. For example, it is not clearly outlined how the difference between the magnitude of the generated vectors and their expected magnitude stays below 0.3%.
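To illustrate the kind of verification I would expect to see reported, here is a minimal sketch of my own. It assumes random vectors with i.i.d. components of variance 1/d, which is a common VSA construction but only my assumption about what the authors do; the function name and trial counts are likewise hypothetical.

```python
import numpy as np

def check_unary_vector_precision(d, n_trials=1000, seed=0):
    """Sketch: empirically estimate how precisely random 'unary' vectors
    have unit magnitude and are mutually orthogonal, assuming i.i.d.
    components of variance 1/d (an assumption, not necessarily the
    paper's construction)."""
    rng = np.random.default_rng(seed)
    vectors = rng.normal(0.0, 1.0 / np.sqrt(d), size=(n_trials, d))
    norms = np.linalg.norm(vectors, axis=1)
    magnitude_error = np.abs(norms - 1.0).mean()   # deviation from the expected norm of 1
    dots = vectors @ vectors.T
    off_diag = dots[~np.eye(n_trials, dtype=bool)]
    orthogonality_spread = off_diag.std()          # cross dot products should concentrate near 0
    return magnitude_error, orthogonality_spread

for d in (100, 1000):
    mag_err, orth_spread = check_unary_vector_precision(d)
    print(f"d={d}: mean |norm - 1| = {mag_err:.4f}, spread of cross dot products = {orth_spread:.4f}")
```

The numbers produced by such a sketch would not necessarily match the quoted 0.3%/0.4% figures, which is precisely why the generation scheme and the verification protocol should be spelled out.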
Following that, the authors discuss partial knowledge encoding, a method for semantically encoding uncertain knowledge, and how it can be implemented in common data structures. The authors have addressed previous comments regarding the enumeration in the DS structure. However, further clarification is needed for the bundling and binding rules mentioned. Specifically:
For bundling, the concepts of similarity and relatedness need further explanation. I believe cosine similarity may be implied here, but this should be explicitly stated. Adding a single word could significantly reduce the ambiguity and jargon in this section.
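To make this concrete, the following is the kind of one-line check I have in mind; bundling is assumed here to be plain vector addition, which may or may not match the authors' definition:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 1000
a, b, c = rng.normal(0.0, 1.0 / np.sqrt(d), size=(3, d))

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

bundle = a + b                 # bundling assumed to be (unnormalized) addition
print(cosine(bundle, a))       # clearly positive: the bundle stays "similar" to its components
print(cosine(bundle, c))       # near zero: unrelated symbols remain dissimilar
```

If "similarity" in the text indeed means cosine similarity, saying so once would remove the ambiguity.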
Regarding binding, I have additional questions. The text states (p. 5):
'...enjoys the property that the corresponding unbinding operator Bs1 allows retrieving s2.'
Firstly, I recommend avoiding the word 'enjoys' in academic writing, as it sounds too informal for a serious journal. Secondly, this phrasing raises questions for the reader: how and why can the operator 'retrieve' s2? The internal mechanism behind this operation, whether mathematical or computational, is not explained, and simply stating that the operator 'enjoys' this property does not clarify it. More detail is required to clearly articulate how the unbinding operator functions.
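For instance, if binding were element-wise (Hadamard) multiplication of bipolar vectors, which is only an assumption on my part (the paper may well use a different operator, e.g. circular convolution), then unbinding would simply be re-multiplication by s1, and a short worked example of this kind would already answer the "how":

```python
import numpy as np

rng = np.random.default_rng(2)
d = 1000
# Bipolar symbol vectors; this representation is my assumption, not the paper's.
s1 = rng.choice([-1, 1], size=d)
s2 = rng.choice([-1, 1], size=d)

bound = s1 * s2          # binding as element-wise multiplication
retrieved = s1 * bound   # unbinding: s1 * (s1 * s2) = s2, since s1 * s1 = 1 element-wise

print("s2 retrieved exactly:", np.array_equal(retrieved, s2))
```

Whatever the actual operator is, a comparable two-line derivation or snippet would make the retrieval mechanism explicit.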
Then, in symbol indexing and specification, the equation 0 ≃ ∥x−x′∥ ≃ (τ−τ′) u_k + ν (σ+σ′), up to first order, with |τ−τ′| < σ+σ′, is referenced as being developed in Appendix A. However, it is unclear how this relates to the hypothesis test of whether two vectors are orthogonal or not. Since τ represents the belief value and σ the noise standard deviation, the connection between the equation and the hypothesis test needs, in my view, further clarification.
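My reading is that the intended test reduces to checking whether ∥x−x′∥ stays within a few noise standard deviations of zero; the sketch below is my own guess at such a decision rule (the threshold factor k and the rule itself are not taken from the paper). If this is what Appendix A intends, stating it in this operational form would help.

```python
import numpy as np

def same_symbol_test(x, x_prime, sigma, sigma_prime, k=3.0):
    """Guessed decision rule: declare x and x' encodings of the same
    (non-orthogonal) symbol when ||x - x'|| falls below k times the
    combined noise level (sigma + sigma'). Both k and the rule itself
    are my assumptions, not the paper's."""
    return np.linalg.norm(x - x_prime) < k * (sigma + sigma_prime)

# Toy usage with a made-up unary direction u_k and belief values tau, tau'.
rng = np.random.default_rng(3)
d, sigma, sigma_p = 1000, 0.02, 0.02
u_k = rng.normal(0.0, 1.0 / np.sqrt(d), size=d)
x = 0.80 * u_k + sigma * rng.normal(0.0, 1.0 / np.sqrt(d), size=d)
x_p = 0.81 * u_k + sigma_p * rng.normal(0.0, 1.0 / np.sqrt(d), size=d)
print(same_symbol_test(x, x_p, sigma, sigma_p))   # expected: True, since |tau - tau'| < sigma + sigma'
```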
On page 15, regarding the newly added benchmark: the text says 'Let's consider two rather large VSA experiments', but only one experiment is presented, namely the one on the KJB dataset. The phrase 'Thanks to the …' should be rephrased to something simpler, like 'A being modeled by B.' A bigger issue is the lack of a descriptive explanation for Table 4; I had to try to understand it from the table caption alone. Specifically, what is meant by the second block being the macroscopic prediction for the mesoscopic one? Why is the macroscopic prediction of the bias and standard deviation not provided for the KJB data sample? I do not think this comparison yields meaningful results unless more explanation is provided, since the mesoscopic model is compared against the KJB sample on one hand and the macroscopic prediction against the mesoscopic model on the other.
The practical experiment is still very preliminary, although its presentation has improved.
Overall, I appreciate the authors' efforts in improving the work. They have added some empirical testing of the macro implementation. However, the manuscript still suffers from a lack of clarity in the key technical and mathematical parts.