Research papers by Logan Matthew Napolitano, founder of Proprioceptive AI, on AI safety, transformer architectures,
neural network monitoring, behavioral control, and alignment methods.
Predictive Behavioral Detection in Frozen Language Model Hidden States: Evidence for Pre-Surface Behavioral Encoding. Logan Matthew Napolitano (2026). Preprint · DOI: 10.5281/zenodo.18701152
Proprioceptive AI: Self-Compounding Behavioral Probes for Autonomous Model Improvement and Probe-Guided Compression. Logan Matthew Napolitano (2026). Preprint · DOI: 10.5281/zenodo.18523713
Unified Behavioral Modulation in Large Language Models: Cross-Architecture Validation of Geometric Behavioral Subspaces. Logan Matthew Napolitano (2026). Preprint · DOI: 10.5281/zenodo.18471775
Decode-Time Behavioral Control for Language Models via Per-Token Risk Prediction. Logan Matthew Napolitano (2026). Preprint · DOI: 10.5281/zenodo.18367221
A Symbolic Control Runtime for Consistency-Aware Reasoning with Transformer Backends. Logan Matthew Napolitano (2026). Preprint · DOI: 10.5281/zenodo.18254824
Reducing Self-Referential Gaming in Consistency-Aware Transformers by Grounding Control Signals in External Task Outcomes. Logan Matthew Napolitano (2026). Preprint · DOI: 10.5281/zenodo.18249601
The Holonomy Transformer: A Geometrically-Native Neural Architecture for Consistent Reasoning. Logan Matthew Napolitano (2026). Preprint · DOI: 10.5281/zenodo.18247940