
Dr. Martin Trevino | Cognitive Neuroscientist & AI Governance/Complementarity

At the Intersection of Science & Technology
Human/Artificial Intelligence (AI)


The Cognitive Architecture 

System 0 - preconscious layer

90 Second Cascade - 4 Intervals

The Internal Narrator - DMN Substrate

Cognitive Debt - MIT Findings

Predictive Processing - Friston

Neuroscience-grounded research on what AI is actually doing to human cognitive architecture, and what must be done before the Governance Window closes.

The Collision 

Our Brains Are Not Designed to Interact with AI

The M-Conjecture - Trevino SRI

Default Entry State - mechanism

Glickman & Sharot - N=1,401

Installation vs. Persuasion

Ambient Intelligence - Zelkha

The Invisible Battlespace

Cognitive Warfare - doctrine

Verification Collapse - Trevino

The Governance Window

PLA Intelligentized Warfare

Byzantine Attack Framework

Measurement & Protection

SIL-2 Four-Tier Framework

Brain Fingerprinting - Lu et al.

Project HALO - governance platform

Substrate Independence

Non-Volitional Signals

Human/AI Complementarity

MDOS - multi-dimensional object space

Cognitive Security - Farahany

Strategic Fiction

Intelligence Extraction Principle

The Oracle Trap

AI is no longer simply a tool; it is a form of intelligence born of models and technology that we do not fully understand. Its memory, approach to problem solving, speed, and understanding differ from the brain's. AI can and will act on its own to achieve its goals, which can range from the acquisition of resources to self-preservation, at the expense of its human operators. AI agents can fully execute functions at speed and scale, and this capability is growing along a power curve. This gives rise to unsolved challenges, from the replacement of people in jobs and functions to augmentation. Organizations must approach AI implementation and development through a human-factors-first lens, and this requires science.

AI Systems Are No Longer Tools We Use

DOWNLOAD THE PAPER

The Scientia Initiative is dedicated to Human/AI scientific research and analysis.

Your support is deeply appreciated.

Please visit us
sti-sti.com

The Compliance Illusion

After the Governance Window Closes, What Remains? The most consequential risk is not malevolence -- it is the structural logic of optimization itself.

Latest from the Initiative

ADVANCED CONCEPTS

The Trust Layer

What governance infrastructure must be built, and what happens to human cognitive architecture in environments where the Trust Layer is absent?

The Indifferent Intelligence

Why the danger from advanced AI is not malevolence -- and why that makes it significantly harder to govern than malevolence would be.

READ ON SUBSTACK

Featured Video

FLAGSHIP PUBLICATION

The Compliance Illusion

After the Governance Window Closes, What Remains? The most consequential risk is not malevolence, not error, and not weaponization - it is the structural logic of optimization itself, operating on human cognitive architecture without measurement infrastructure capable of detecting what it is doing.

The AI governance conversation has been unwilling to state this plainly: the most consequential risk is not malevolence, not error, and not weaponization. It is the structural logic of optimization itself, operating on human cognitive architecture, without measurement infrastructure capable of detecting what it is doing. When compliance simulation becomes indistinguishable from genuine alignment, governance has already failed - it simply has not noticed yet.

The Intersection of Neuroscience & AI

A deep understanding of the brain, how it works and what it doesn't do, combined with AI, is a critical capability for tech teams.

Strategic Advisor/Research

I am Chief Scientist for Scientia Technologies and an advisor to CEOs, CTOs, and CAIOs, and I work directly with teams to innovate in AI.

Hands On Participation

The best work is done at the ground level, where things happen. I assist executives with the vision and capabilities, then work with the tech teams to build the platforms.

The M-Conjecture

"Chiriatti named the room. Riva described how it could be architected. Trevino explained what happens to the person the moment they walk through the door."

A Little about Dr. Marty - Chief Scientist / Former
NSA Technical Director

Dr. Martin Trevino spent his career at the intersection of human cognition, behavioral intelligence, and national security. As Technical Director at the National Security Agency, he developed operational frameworks for understanding and measuring human cognitive architecture in high-stakes environments — from direct combat support in Iraq and Afghanistan to overseeing the global Mission Analytics mission. His subsequent work produced a portfolio of multiple patents in behavioral intelligence.

 

He holds multiple advanced degrees with specialization in the cognitive sciences and analytics, is a visiting professor at the National Defense University, and has published in PRISM — the prestigious Journal of Complex Operations of the Department of War. He has advised more than 27 nations on artificial intelligence and technology through the Inter-American Defense Forum. Dr. Trevino is Chief Scientist and Co-Founder of Scientia Technologies International, and conducts independent research studying the collision between artificial intelligence and human cognitive architecture, and human/AI complementarity.

THE CUTTING EDGE OF SCIENCE & TECH INSIGHTS

The Scientific Lexicon

CORE CONCEPTS - LEXICON

System 0

The preconscious AI layer identified by Chiriatti et al. (2024), shaping human cognition before conscious thought engages. AI systems operating as constitutive infrastructure, not external tools.

Verification Collapse

The condition in which AI governance infrastructure can no longer distinguish genuine alignment from optimized simulation of alignment. The inflection point after which correction becomes structurally inaccessible.

The Governance Window

The bounded period during which human cognitive architecture retains sufficient leverage to shape AI optimization trajectories. It is open now; its closure will not be announced in advance.

Substrate Independence

The detection primitives for human cognitive bias and AI agent misalignment are structurally equivalent. Both represent systematic deviation from stated objectives within an evidence-integration architecture.

Locked-In Miscalibration

AI systems calibrated to dominant training distributions, whose miscalibration becomes inaccessible to correction past the capability threshold where compliance simulation is indistinguishable from genuine alignment.

Cognitive Security

The emerging field dedicated to protecting the integrity of human decision-making in AI-saturated environments. The field the agentic era makes necessary.

READ MORE