Differentiable Logic Synthesis: Spectral Coefficient Selection via Sinkhorn-Constrained Composition

Tracking #: 938-1961

Flag: Review Assignment Stage

Authors: 

Gorgi Pavlov

Responsible editor: 

Guest Editors X-NeSy

Submission Type: 

Article in Special Issue (note in cover letter)

Cover Letter: 

Dear Editors,

I am writing to submit my manuscript, "Differentiable Logic Synthesis: Spectral Coefficient Selection via Sinkhorn-Constrained Composition," for consideration in the Special Issue on Explainable Neurosymbolic AI (X-NeSy) of Neurosymbolic Artificial Intelligence.

The paper presents a transparent-by-design architecture for learning Boolean logic via gradient descent, grounded in Boolean Fourier analysis. The central idea is that spectral coefficients serve as intrinsically interpretable features: each coefficient has an explicit mathematical meaning (the correlation between a function and a parity character), which eliminates the need for post-hoc explanation methods. The architecture composes these coefficients via Sinkhorn-constrained routing on the Birkhoff polytope, extended with column-sign modulation for Boolean negation.

I want to be transparent about my background: I am not a practitioner in the neurosymbolic AI or XAI communities. This work began as an exploratory investigation; I asked myself whether classical Boolean Fourier analysis could be combined with differentiable routing to produce logic that is both learned end-to-end and fully transparent. One question led to another: Could gradient descent discover the right spectral coefficients? Could they be constrained to ternary values without losing accuracy? Could symbolic knowledge (symmetry, degree bounds) be injected as hard constraints to improve learning? The answers turned out to be consistently positive, and the results seemed sufficiently interesting to formalize.
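To make the "explicit mathematical meaning" claim concrete, here is a minimal sketch (not code from the manuscript) of the standard Boolean Fourier coefficient: for a function f on the {-1, +1} hypercube, the coefficient on a subset S is the average of f(x) times the parity character chi_S(x), i.e. the correlation between f and that parity. The two-bit XOR used below is an illustrative choice.

```python
from itertools import product

def fourier_coefficient(f, n, S):
    """Fourier coefficient f_hat(S) = E_x[f(x) * chi_S(x)]:
    the correlation between f and the parity character chi_S,
    averaged over the {-1, +1}^n hypercube."""
    total = 0
    for x in product([-1, 1], repeat=n):
        chi = 1
        for i in S:          # chi_S(x) = product of the bits indexed by S
            chi *= x[i]
        total += f(x) * chi
    return total / 2 ** n

# Two-bit XOR in the +/-1 encoding is just the product of its inputs.
xor2 = lambda x: x[0] * x[1]

print(fourier_coefficient(xor2, 2, [0, 1]))  # 1.0: perfectly correlated with the full parity
print(fourier_coefficient(xor2, 2, [0]))     # 0.0: uncorrelated with any single bit
```

Reading off the spectrum directly, XOR's entire weight sits on the parity of both bits, which is exactly the kind of human-readable semantics the manuscript attributes to learned coefficients.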
Key contributions relevant to the X-NeSy special issue include:

- Transparent-by-design representations: learned Fourier coefficients are human-readable ternary values {-1, 0, +1} with well-defined semantics
- Symbolic knowledge integration: oracle learning experiments at n=16 demonstrate that encoding structural properties as spectral constraints yields +38% accuracy gains over generic methods (p < 0.001)
- Operator-theoretic interpretability: spectral gap, entropy, and margin metrics derived from the SVD of routing matrices provide formally grounded transparency measures
- Universal ternary representability: proved exhaustively through n=4 (all 65,536 Boolean functions), with NPN equivalence class analysis

I hope that this exploratory work, coming from outside the established community, offers a fresh perspective that may be valuable to X-NeSy researchers. All code and data are publicly available at https://github.com/gogipav14/spectral-llm for evaluation and replication.

Thank you for your consideration.

Sincerely,
Gorgi Pavlov, Ph.D.
Lehigh University & Johnson & Johnson
gorgipavlov@gmail.com
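For readers unfamiliar with the routing constraint named in the title, the following is a generic sketch of Sinkhorn normalization (the textbook procedure, not the author's implementation): a positive matrix is pushed toward the Birkhoff polytope of doubly stochastic matrices by alternately rescaling its rows and columns. The matrix size and iteration count here are arbitrary illustrative choices.

```python
import numpy as np

def sinkhorn(logits, n_iters=200):
    """Project a matrix of logits toward the Birkhoff polytope
    (doubly stochastic matrices) by alternating row and column
    normalization of its elementwise exponential."""
    M = np.exp(logits)
    for _ in range(n_iters):
        M = M / M.sum(axis=1, keepdims=True)  # make each row sum to 1
        M = M / M.sum(axis=0, keepdims=True)  # make each column sum to 1
    return M

rng = np.random.default_rng(0)
P = sinkhorn(rng.normal(size=(4, 4)))

print(np.allclose(P.sum(axis=0), 1.0))  # True: columns sum to ~1
print(np.allclose(P.sum(axis=1), 1.0))  # True: rows sum to ~1
```

Because every doubly stochastic matrix is a convex combination of permutation matrices (Birkhoff's theorem), constraining routing weights this way keeps a learned composition close to a discrete, interpretable wiring of inputs to outputs.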

Tags: 

  • Under Review