Call for Papers: Special Issue on Trustworthy Neurosymbolic AI

Deep Learning has revolutionized the landscape of Artificial Intelligence (AI) in fields such as Natural Language Processing, Computer Vision, and Decision Support. At the same time, it has drawn stark attention to the trustworthiness of these systems, including generative Large Language Models, which are known to hallucinate, especially when attempting to explain their answers. These characteristics of generative AI (including DeepFakes in computer vision and audio) have led to various challenges related to trustworthiness, reliability, and ethics. Digital surveillance cameras and web scrapers collect massive amounts of data for use in AI training, often without the knowledge or consent of those affected. AI assisting in human decision-making (or even AI making autonomous decisions) in safety-critical tasks such as driving can cause harm without offering any way to identify or fix the underlying problems. One possible route to improving the reliability, robustness, and trustworthiness of AI is a neurosymbolic system that bridges the gap between symbolic and neural approaches to AI. Neurosymbolic AI can also improve integration by managing bias, refining data quality, aligning AI with human values, and providing human-compatible explanations for AI-generated predictions.

Our aim with this issue is twofold: (i) to showcase new and novel approaches to building trustworthy AI systems, and (ii) to critically evaluate trustworthiness through the lens of neurosymbolic approaches.
The goal of this special issue is to bring together a diverse set of ideas, implementations, and evaluations that allow us to make progress toward trustworthy neurosymbolic systems. We welcome submissions that showcase neurosymbolic contributions across a broad range of applications enabling users, stakeholders, and system designers to place their trust in AI systems, whether by contributing methods, approaches, benchmarks, frameworks, metrics, or novel tools and technologies.

Topics of Interest

We welcome original high-quality submissions on (but not restricted to) the following topics:

  • Secure, trustworthy, robust, resilient AI
  • Identification and prevention of fake news and misinformation (increasing trustworthiness of information)
  • Provenance, trust, and metadata for authoritative sources (e.g., authenticity and integrity)
  • Identity and identifiers
  • Measuring and improving data quality (e.g., reducing data manipulation/poisoning attacks)
  • Managing bias and ensuring fairness
  • Enabling transparency, explainability, and accountability/auditability
  • Trustworthy autonomous control, decision support, and generative AI
  • Human-compatible explainable AI for trustworthiness and robustness
  • Information flow control and accountability
  • Privacy-preserving AI
  • Law, governance, and legal/compliance issues
  • Trust, accountability, and autonomy in knowledge-based AI for self-determination (user agency)


Submission deadline: July 30th, 2024. Earlier submissions will be processed as they come in.

Guest Editors:

  • Mehwish Alam, Institut Polytechnique de Paris, France
  • Leilani Gilpin, UC Santa Cruz, Department of Computer Science and Engineering
  • Sabrina Kirrane, Vienna University of Economics and Business, Institute for Information Systems & New Media
  • Eugene Vasserman, Kansas State University, Department of Computer Science

Contact email for the guest editors:

Guest Editorial Board:

(to be completed)

Author Guidelines:

We invite full papers, dataset descriptions, application reports, and reports on tools and systems. Submissions must be original and must not have been published previously or be under consideration for publication elsewhere while being evaluated for this special issue. Authors can extend previously published conference or workshop papers; see the submission guidelines for details. Submissions shall be made through the journal's website, and prospective authors must take note of the submission guidelines posted there.

Note that you need to request an account on the website to submit a paper. Please indicate in the cover letter that it is for the "Special Issue on Trustworthy Neurosymbolic AI". All manuscripts will be reviewed based on the journal's open and transparent review policy and will be made available online during the review process.