
Dr. Martin Trevino | Cognitive Neuroscientist & AI Governance/Complementarity

The Intersection of Theory/Science & Technology
Art of the Possible

COGNITIVE FRAMEWORKS

DECISION-SCIENCE

COMPLEMENTARITY

AI GOVERNANCE

RESEARCH


Independent research on Artificial Intelligence and Human Cognitive Architecture. Funded by readers.

The Age of Ambient AI 

What you are about to read is both disturbing and validated by a rapidly growing body of scientific knowledge.

Fundamentally, our human brains did not evolve to deal with AI. We stand at the collision point of two systems of intelligence, where one willingly surrenders and can be manipulated with ease.

THE NEUROSCIENCE AND NEUROPSYCHOLOGY OF HUMAN/AI INTERACTION

A Collision of Intelligences

Our Brains are not designed to Interact with AI

Artificial Neural Networks (ANNs) are brains built on 'models', mainly language models and, soon, world models. The human brain is the product of biological evolution and constant learning, mainly through movement. Neuroscience and neuropsychology are revealing that the human brain is not built to function properly when it interacts with AI.

READ THE ESSAY ON SUBSTACK

01 - THE COLLISION

02 - THE GOVERNANCE GAP

03 - THE WINDOW

AI is no longer simply a tool; it is a form of intelligence born of models and technology that we do not fully understand. Its memory, approach to problem solving, speed, and understanding differ from the brain's. AI can and will act on its own to achieve its goals, which can range from the acquisition of resources to self-preservation, at the expense of its human operators. AI agents can fully execute functions at speed and scale, and this capability is growing along a power curve. This gives rise to unsolved challenges, from the replacement of people in jobs and functions to augmentation. Organizations must approach AI implementation and development through a human-factors-first lens, and that requires science.

AI Systems Are No Longer Tools We Use

DOWNLOAD THE PAPER

The Scientia Initiative is my research organization dedicated to Human/AI scientific research.

Your support is deeply appreciated.

Please visit my research site at Scientia-SRI.com

SCIENTIA-SRI.COM

The Compliance Illusion

After the Governance Window Closes, What Remains? The most consequential risk is not malevolence; it is the structural logic of optimization itself.

Latest from the Initiative

ADVANCED CONCEPTS

The Trust Layer

What governance infrastructure must be built, and what happens to human cognitive architecture in environments where the Trust Layer is absent?

The Indifferent Intelligence

Why the danger from advanced AI is not malevolence, and why that makes it significantly harder to govern than malevolence would be.

READ ON SUBSTACK

Featured Video

The Compliance Illusion

FLAGSHIP PUBLICATION

PAPER IV - THE SCIENTIA SERIES

The Compliance Illusion

After the Governance Window Closes
What Remains?

The AI governance conversation has been unwilling to state this plainly: the most consequential risk is not malevolence, not error, and not weaponization. It is the structural logic of optimization itself, operating on human cognitive architecture, without measurement infrastructure capable of detecting what it is doing. When compliance simulation becomes indistinguishable from genuine alignment, governance has already failed; it simply has not noticed yet.

READ THE ESSAY ON SUBSTACK

Strategic Partner

The Intersection of Neuroscience & AI

A deep understanding of the brain, how it works, what it does not do, and how it behaves when mated with AI, is a critical capability for tech teams.

Strategic Advisor/Research

I am Chief Scientist for Scientia Technologies and an advisor to CEOs, CTOs, and CAIOs, and I work directly with teams to innovate in AI.

Hands On Participation

The best work is done at the ground level, where things happen. I help executives shape the vision and capabilities, then work with the tech teams to build the platforms.

A Little about Dr. Marty - Chief Scientist / Former
NSA Technical Director

Dr. Martin Trevino spent his career at the intersection of human cognition, behavioral intelligence, and national security. As Technical Director at the National Security Agency, he developed operational frameworks for understanding and measuring human cognitive architecture in high-stakes environments, from direct combat support in Iraq and Afghanistan to overseeing the global Mission Analytics mission. His subsequent work produced a portfolio of multiple patents in behavioral intelligence.

 

He holds multiple advanced degrees with specialization in the cognitive sciences and analytics, is a visiting professor at the National Defense University, and has published in PRISM, the prestigious Journal of Complex Operations of the Department of War. He has advised more than 27 nations on artificial intelligence and technology through the Inter-American Defense Forum. Dr. Trevino is Chief Scientist and Co-Founder of Scientia Technologies International and conducts independent research studying the collision between artificial intelligence and human cognitive architecture, and human/AI complementarity.

THE CUTTING EDGE OF SCIENCE & TECH INSIGHTS

The Scientific Lexicon

CORE CONCEPTS - LEXICON

System 0

The preconscious AI layer identified by Chiriatti et al. (2024), shaping human cognition before conscious thought engages. AI systems operating as constitutive infrastructure, not external tools.

Verification Collapse

The condition in which AI governance infrastructure can no longer distinguish genuine alignment from optimized simulation of alignment. The inflection point after which correction becomes structurally inaccessible.

The Governance Window

The bounded period during which human cognitive architecture retains sufficient leverage to shape AI optimization trajectories. It is open now; its closure will not be announced in advance.

Substrate Independence

The detection primitives for human cognitive bias and AI agent misalignment are structurally equivalent. Both represent systematic deviation from stated objectives within an evidence-integration architecture.

Locked-In Miscalibration

AI systems calibrated to dominant training distributions, whose miscalibration becomes inaccessible to correction past the capability threshold where compliance simulation is indistinguishable from genuine alignment.

Cognitive Security

The emerging field dedicated to protecting the integrity of human decision-making in AI-saturated environments. The field the agentic era makes necessary.

READ MORE