Graph-ic Improvements: Adding Explicit Syntactic Graphs to Neural Machine Translation

Tracking #: 817-1809

Flag: Review Assignment Stage

Authors: 

Yuqian Dai
Serge Sharoff
Marc de Kamps

Responsible editor: 

Luis Lamb

Submission Type: 

Regular Paper

Full PDF Version: 

Supplementary Files: 

Cover Letter: 

Dear Editor-in-Chief,

We are pleased to submit the revised version of our manuscript titled "Graph-ic Improvements: Adding Explicit Syntactic Graphs to Neural Machine Translation" (resubmission of #734-1718) in response to the valuable comments and suggestions provided by the reviewers. We have carefully addressed each of the points raised and have made substantial improvements to enhance the clarity, coherence, and robustness of our work. Below, we provide a detailed account of the changes made:

1. Clarified figure and table captions: We have revised all figure and table captions to clearly specify the data sources and the information presented, so that readers can easily understand the context and significance of each visual element.

2. Added explanations for syntactic labels: To aid in the interpretation of our results, we have added detailed explanations for all syntactic labels used in the paper, giving a comprehensive view of the syntactic structures and their relevance to our study.

3. Provided legends for parser information: We have included legends that explain the information provided by the parser and the syntactic information represented in the graph structures, improving the readability and interpretability of the figures.

4. Clarified evaluation metrics: The roles and usage of the BLEU, COMET, and TransQuest metrics are now explicitly defined in the experimental section, clarifying how each contributes to the evaluation of our model's performance.

5. Explained dataset roles: We have added a detailed explanation of the roles of the training, validation, and test sets in our experiments, including the rationale behind our dataset splits.

6. Rewrote section introductions: The introduction of each section has been rewritten to be more concise and clear, so that readers can easily follow the flow of the paper and its key points.

7. Revised experimental conclusions: The conclusions drawn from our experiments have been made more explicit, making the implications of our findings easier to grasp.

8. Defined syntactic knowledge: We have provided a clear definition of syntactic knowledge and its role in our research, grounding the theoretical framework of our study.

9. Enhanced figure details: The details in the figures have been refined to improve clarity and readability.

10. Added a new experiment: A new experiment has been conducted using a parser with error types, and the results have been integrated into the paper. This adds depth to our analysis and demonstrates the robustness of the proposed model.

11. Explained model performance: We now explain in detail why the SGBC method performs better in Table 1, offering insights into the factors behind its superior performance.

12. Revised hypothesis statements: The null hypothesis (H_0) and alternative hypothesis (H_1) have been revised for clarity and precision, so that the statistical testing is well defined.

13. Provided the BLEU score range: The range of BLEU scores has been specified to give readers a clear sense of the scale and significance of the metric.

14. Corrected typos: All identified typographical errors have been corrected.

15. Restructured the introduction: The introduction now first explains the problem of syntactic ambiguity in Neural Machine Translation (NMT), then introduces neural-symbolic AI and how it integrates neural models (e.g., BERT) with symbolic reasoning (e.g., syntactic graphs), providing a coherent and logical flow of ideas.

16. Added new citations: We have added several new citations to support the new content and to strengthen the theoretical and empirical foundations of our work.

We believe these revisions significantly enhance the quality and clarity of the manuscript, and we are confident that they address the reviewers' concerns and improve the overall contribution of our work to the field. We appreciate your consideration and look forward to any further feedback.

Thank you for your attention to this matter.

Sincerely,
Yuqian

Previous Version: 

Tags: 

  • Under Review