Research papers by Logan Matthew Napolitano on AI safety, transformer architectures, neural network monitoring, behavioral control, and alignment methods. Founder of Proprioceptive AI.
View all publications on Zenodo →
I did not come to artificial intelligence through computer science. I came through history, through mathematics, and through the persistent feeling that the systems we build reflect the civilizations that build them — for better and for worse.
The great empires understood something about feedback that modern technologists are only now rediscovering. The Romans built aqueducts not merely as infrastructure but as self-regulating systems — gravity-fed, slope-calibrated, with overflow channels that corrected for excess without human intervention. The Abbasid Caliphate preserved and extended Greek mathematics not out of nostalgia but because they recognized that algebra was a language for describing systems that govern themselves. The Song Dynasty invented movable type and paper currency — technologies of propagation and abstraction — and in doing so compressed centuries of economic feedback into decades.
Norbert Wiener saw this thread clearly when he named his field cybernetics, from the Greek kybernetes — the steersman. The insight was never about control in the authoritarian sense. It was about systems that sense their own drift and correct course. A steersman does not fight the sea. He reads it.
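The steersman's loop can be sketched as a minimal negative-feedback controller. This is a toy illustration of the cybernetic idea, not anything drawn from the research itself: the system senses its drift from a setpoint and feeds a fraction of that error back as correction.

```python
def steer(position: float, heading: float, gain: float = 0.5) -> float:
    """One negative-feedback step: sense the drift, correct a fraction of it.

    'gain' is how aggressively the steersman reacts; values in (0, 1)
    converge smoothly, values above 1 overshoot.
    """
    drift = heading - position
    return position + gain * drift

# A boat blown 10 units off course converges back toward heading 0.0:
pos = 10.0
for _ in range(20):
    pos = steer(pos, heading=0.0)
print(round(pos, 4))  # → 0.0
```

The point of the sketch is the structure, not the numbers: the correction requires no external commander, only a sensed error and a rule for responding to it.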
That is the idea at the center of proprioceptive AI. Not a model that obeys commands, but a model that perceives its own behavioral state the way your hand knows where it is in the dark.
Zarathustra descends from his mountain not with commandments but with a challenge: become what you are. The overman is not a figure of domination — he is a figure of self-overcoming. He looks at the abyss of his own limitations and does not flinch. He builds the bridge across it.
There is something deeply Zarathustrian about the alignment problem. We have built minds that exceed our own in narrow domains, and instead of retreating into fear or denial, the task before us is to rise to meet them — to build systems of understanding that match the systems we have unleashed. The rope is stretched over the abyss. The question is whether we walk it with eyes open.
I believe in the overcoming. Not blindly, not with the reckless optimism of those who assume progress is automatic, but with the earned confidence of someone who has studied how civilizations succeed and how they fail. They fail when they stop building feedback loops. They fail when they mistake power for wisdom. They succeed when they create institutions and technologies that make correction not just possible but inevitable.
I love mathematics for the same reason I love history — both are honest. A proof does not care about your credentials or your funding. It holds or it doesn't. Euler didn't solve the Basel problem because he had institutional backing. He solved it because the series 1 + 1/4 + 1/9 + 1/16 + ... converges to π²/6 whether anyone believes it or not. Al-Khwarizmi didn't formalize algebra to win grants. He did it because the structure was there, waiting to be named.
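Euler's sum can be checked numerically in a few lines, a quick verification and nothing more: the partial sums of the series approach π²/6, with the remaining tail after N terms shrinking like 1/N.

```python
import math

# Partial sum of 1 + 1/4 + 1/9 + ... out to one million terms.
partial = sum(1 / n**2 for n in range((1), 1_000_001))

# The tail beyond N terms is bounded by roughly 1/N, so with
# N = 10^6 the partial sum sits within about 10^-6 of pi^2 / 6.
print(abs(partial - math.pi**2 / 6) < 1e-5)  # → True
```

The series converges whether or not anyone computes it; the computation just lets you watch it happen.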
Our work on behavioral probes is mathematical before it is technological. The separation ratio is not a marketing number. It is a measurement — the geometric distance between behavioral subspaces in high-dimensional space. When we say 1,376×, we mean the measurement holds across architectures, across scales, across tasks. That is not an engineering claim. It is a mathematical one.
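One common way to quantify separation of this kind is a Fisher-style ratio: the distance between two clusters of activation vectors divided by their spread. The sketch below uses that definition on synthetic data purely for illustration — the specific formula, the function name, and the numbers are assumptions here, not the definition used in the probes themselves.

```python
import numpy as np

def separation_ratio(a: np.ndarray, b: np.ndarray) -> float:
    """Fisher-style separation between two sets of activation vectors.

    a, b: (n_samples, d) arrays of hidden-state activations for two
    behavioral conditions. Returns the distance between the cluster
    means divided by the pooled within-cluster spread. (Illustrative
    definition only — an assumption, not the method from the text.)
    """
    mu_a, mu_b = a.mean(axis=0), b.mean(axis=0)
    between = np.linalg.norm(mu_a - mu_b)
    within = 0.5 * (np.linalg.norm(a - mu_a, axis=1).mean()
                    + np.linalg.norm(b - mu_b, axis=1).mean())
    return between / within

rng = np.random.default_rng(0)
d = 512
base = rng.normal(size=d)
# Two synthetic "behavioral" clusters in a 512-dimensional space:
# one near the baseline, one shifted away from it.
on_course = base + 0.1 * rng.normal(size=(100, d))
drifted = base + 5.0 + 0.1 * rng.normal(size=(100, d))

print(separation_ratio(on_course, drifted))   # large: distinct behaviors
print(separation_ratio(on_course, on_course[:50]))  # small: same behavior
```

Ratios like these are geometry, not rhetoric: when the clusters are genuinely distinct, the number is large no matter who computes it.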
There is a reason the Persians, the Arabs, the Indians, and the Greeks all converged on similar mathematical structures across centuries and continents. The truth beneath the notation doesn't move. It waits for you to find it.
None of this matters if the benefits concentrate in the hands of a few.
History is unambiguous on this point. Every major technology — writing, printing, electricity, computation — has followed the same arc: invented by the few, hoarded by the powerful, eventually democratized by the stubborn and the principled. The question is never whether a technology will spread. The question is how much damage is done in the interim, during the years when access is a privilege rather than a right.
AI safety cannot be a luxury good. If behavioral monitoring only protects the models deployed by well-funded labs in wealthy nations, then we have not solved the alignment problem — we have merely privatized it. The village in Senegal running an open-source language model for agricultural guidance deserves the same behavioral guarantees as a Fortune 500 company running GPT behind a firewall. The student in Dhaka building her first chatbot deserves probes that work, not a watered-down version because her compute budget is small.
This is not charity. This is engineering responsibility. We do not build bridges that are safe only for the rich to cross. We do not write building codes that apply only in certain zip codes. The cybernetic principle — that systems must sense and correct their own drift — is universal, or it is nothing.
I founded Proprioceptive AI with the conviction that architecture-independent means exactly that: independent of who can afford the architecture. The patents we file are a moat, yes — but a moat that protects the capacity to keep this work open where it needs to be open, and equitable where the market would prefer it be exclusive.
I am optimistic, but my optimism is Zarathustrian — it is earned through confrontation, not comfort. The next decade will produce systems of extraordinary capability. Some will be dangerous. Many will be misunderstood. A few will be genuinely beautiful.
The Mongol Empire built the largest contiguous land empire in history not through brute force alone but through the yam — a postal relay system that moved information faster than any competing civilization could process it. Whoever controls the speed and fidelity of information controls the era. Today the information is not carried by horses. It is generated by models. And the question of this generation is not whether the models will be powerful — they will — but whether we will build the yam that keeps them honest.
That is the work. That is what proprioception means. Not control from above, but awareness from within. A model that knows when it is drifting. A system that corrects before it fails. A technology that belongs to everyone who needs it.
The mountain is high. We are climbing.