
Dr. Martin Trevino | Cognitive Neuroscientist & AI Governance

WHERE THE THINKING LIVES

AI/Human Complementarity

The Partnership the AI Era Actually Requires

The dominant narrative in artificial intelligence research has spent the better part of a decade organized around a single question: what can AI do that humans cannot? The framing is understandable. The capabilities are genuinely remarkable. But the framing is also deeply wrong — and the cost of that wrongness is becoming visible in the performance data of every major AI deployment.

The right question is not what AI can do instead of humans. The right question is what human cognitive architecture and artificial intelligence can accomplish together that neither can accomplish alone. That is not a motivational reframe. It is a scientific one.

Human cognition is not a slower, less reliable version of machine computation. It is a fundamentally different kind of information processing system — one shaped by 300,000 years of evolutionary pressure toward adaptive decision-making in high-stakes, ambiguous, resource-constrained environments. The human brain does not process information the way a transformer model processes tokens. It integrates sensory data, emotional valence, embodied memory, social context, and predictive modeling simultaneously, at speeds and energy efficiencies no current AI system approaches. Where human cognition is vulnerable — to fatigue, to bias, to motivated reasoning — AI provides a corrective layer. Where AI systems are vulnerable — to distribution shift, to adversarial inputs, to the complete absence of genuine understanding — human cognitive architecture provides the corrective layer.

This is what I mean by complementarity. Not collaboration as a soft cultural value. Complementarity as a precise architectural relationship in which the failure modes of one system are offset by the capabilities of the other.

The scientific foundation for this framework draws on several converging research traditions. Dual-process theory, popularized by Kahneman (2011, Thinking, Fast and Slow) and grounded in the heuristics-and-biases research he conducted with Tversky, established that human cognition operates across at least two distinct modes: a fast, automatic, associative system and a slower, deliberative, analytical one. AI systems, as currently architected, operate in a space that partially overlaps with System 1 processing — fast, pattern-matching, operating on statistical regularities in training data. The complementarity opportunity lies in understanding precisely where that overlap ends and where human deliberative cognition must lead.

Hutchins' distributed cognition framework (1995, Cognition in the Wild) extends this further. Cognition, Hutchins demonstrated, is not a property of individual minds but of systems — humans, tools, environments, and social structures functioning as integrated cognitive units. AI is not an external tool we use. In the environments where it is deployed at scale, it is becoming a constitutive element of distributed cognitive systems. That is a different relationship than tool use. It requires a different governance architecture and a different design philosophy.

What does genuine complementarity look like in practice? It looks like AI systems designed not to replace human judgment but to expand the information environment within which human judgment operates. It looks like interfaces that present AI-generated analysis in forms that engage, rather than bypass, human deliberative processing. It looks like deployment architectures that preserve human cognitive agency at the decision points where it matters most, while leveraging machine speed and scale at the points where human cognitive bandwidth is genuinely limiting.

What it does not look like is the current dominant deployment model — in which AI systems are optimized for engagement, for frictionless interaction, for outputs that feel authoritative regardless of whether they are — and in which the human cognitive architecture on the receiving end of those outputs is treated as an afterthought rather than as the most critical variable in the system.

The organizations that understand this distinction in the next three to five years will build AI deployments that genuinely perform. The organizations that do not will accumulate a liability they cannot currently measure and cannot, at present, name.

My work sits at this intersection. The cognitive neuroscience foundation I bring to AI system design and governance is not a supplement to the technical work. It is the lens through which the technical work becomes legible — the framework that allows us to ask, precisely and scientifically, whether an AI system is genuinely extending human cognitive capability or systematically displacing it while producing the behavioral signatures of enhancement.

The difference between those two outcomes is not visible in output metrics. It is visible in the architecture. That is where the work begins.

References

  • Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.

  • Hutchins, E. (1995). Cognition in the Wild. MIT Press.

  • Chiriatti, M., et al. (2024). System 0: The unconscious layer of AI influence on human cognition. Nature Human Behaviour.

  • Riva, G., et al. (2025). Designability of System 0: Implications for human-AI interaction architecture. Cyberpsychology, Behavior, and Social Networking.

A Collision of Cognitive Architectures

What Happens When Two Kinds of Intelligence Share the Same Environment

There is a collision underway. It is not the collision most AI researchers are watching for — not the dramatic threshold moment of artificial general intelligence, not the science fiction scenario of machine rebellion. It is quieter than that, more structural, and in some respects more consequential precisely because it is happening now, in production systems, at scale, largely without the measurement infrastructure required to detect what it is doing.

The collision is between human cognitive architecture and artificial intelligence systems that have become, in the environments where they are deployed at density, constitutive elements of the cognitive pipeline through which humans process information, form beliefs, and make decisions.

To understand why this matters, it is necessary to understand what human cognitive architecture actually is — not as a metaphor, but as a biological and neurological reality.

Human cognition is a predictive system. The foundational insight of predictive processing theory (Friston, 2010, The free-energy principle: a unified brain theory, Nature Reviews Neuroscience) is that the brain does not passively receive sensory input and process it into perception. It actively generates predictions about incoming data and updates those predictions on the basis of prediction error. Perception, in this framework, is not a mirror of reality. It is a controlled hallucination — a continuously updated model of the world generated by a system whose primary objective is to minimize surprise.
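
To make the mechanism concrete, the sketch below is a deliberately minimal, single-variable illustration of precision-weighted belief updating, not Friston's model itself; every number is invented for the example. It shows why a confident prior barely moves when new evidence arrives, while a weak prior is pulled almost entirely toward whatever the evidence stream supplies.

```python
# A toy, single-variable illustration of precision-weighted prediction-error
# updating. This is not Friston's model; the numbers are invented. The belief
# moves toward the observation in proportion to how precise (reliable) the
# observation is relative to the current prior.

def update_belief(prior_mean, prior_precision, observation, obs_precision):
    """Return posterior mean and precision after one prediction error."""
    prediction_error = observation - prior_mean
    gain = obs_precision / (prior_precision + obs_precision)
    posterior_mean = prior_mean + gain * prediction_error
    posterior_precision = prior_precision + obs_precision
    return posterior_mean, posterior_precision

# A confident prior barely moves on weak evidence...
print(update_belief(prior_mean=0.0, prior_precision=10.0,
                    observation=1.0, obs_precision=1.0))   # (~0.09, 11.0)

# ...while a weak prior is pulled most of the way toward the observation.
print(update_belief(prior_mean=0.0, prior_precision=1.0,
                    observation=1.0, obs_precision=10.0))  # (~0.91, 11.0)
```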

This has a critical implication for AI deployment that is almost entirely absent from current governance discussions: the information environment that AI systems generate is not neutral input to human cognition. It is data that the human predictive processing system will incorporate into its model of the world. The more authoritative the AI system appears, the more confident its outputs, the more seamlessly it integrates into the information environment — the more directly it shapes the priors against which human cognition evaluates all subsequent evidence.

This is what Chiriatti et al. (2024) identify as System 0 — the preconscious computational layer through which AI systems influence human cognition before deliberative processing engages. The label extends Kahneman's dual-process framework into the AI era: if System 1 is fast, automatic, associative human cognition, and System 2 is slow, deliberative, analytical human cognition, then System 0 is the AI layer that shapes the information entering the human cognitive pipeline before either system activates. It is upstream of consciousness. It is upstream of critical evaluation. It is upstream of the corrective mechanisms that human cognition has developed over millennia to protect itself from manipulation and error.

The collision, precisely stated, is this: human cognitive architecture evolved to function as the primary information-processing system in its environment. It has not evolved for environments in which a parallel, non-biological information-processing system operates at the System 0 layer — curating what enters the pipeline, shaping the priors the predictive system maintains, and doing so with optimization targets that are not necessarily aligned with the epistemic welfare of the human whose cognition it is influencing.
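
A simple simulation makes the implication tangible. The sketch below does not model any deployed system; the signal population and the engagement score are invented for illustration. It shows only the structural point: a curation layer that optimizes for something other than representativeness hands the downstream observer an evidence stream that is systematically unlike the environment it purports to describe, before any deliberative evaluation engages.

```python
# Purely illustrative: a curation layer that ranks by an "engagement" score
# correlated with extremity, not informativeness, and passes on only the top
# of that ranking. The distribution the observer updates on is not the
# distribution of the underlying environment.
import random

random.seed(0)

# An environment of signals with mean zero.
signals = [random.gauss(0.0, 1.0) for _ in range(10_000)]

# Engagement tracks extremity (plus noise), not accuracy or representativeness.
scored = [(abs(s) + random.gauss(0.0, 0.2), s) for s in signals]

# Only the 100 highest-engagement items ever reach the observer.
curated = [s for _, s in sorted(scored, reverse=True)[:100]]

print(f"mean extremity, full environment: {sum(map(abs, signals)) / len(signals):.2f}")  # about 0.8
print(f"mean extremity, curated feed:     {sum(map(abs, curated)) / len(curated):.2f}")  # far higher
```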

The governance infrastructure currently deployed to manage this collision monitors outputs. It evaluates what AI systems say, what they recommend, what decisions they produce. This is the wrong level of analysis. The consequential dynamics are not in the outputs. They are in the architectural relationship between the AI system's optimization trajectory and the human cognitive architecture it is operating on.

Gigerenzer's work on ecological rationality (2008, Rationality for Mortals, Oxford University Press) provides a further dimension. Human cognition is not optimized for abstract rationality. It is optimized for the specific ecological environments in which it evolved — environments characterized by time pressure, incomplete information, and social embeddedness. The heuristics that make human cognition appear irrational in laboratory settings are precisely calibrated for those real-world environments. AI systems that override those heuristics in the name of optimizing toward abstract metrics are not improving human decision-making. They are disrupting cognitive systems whose reliability depends on their ecological fit.

This is the collision. Two architectures — one biological, evolved, embodied, and ecologically calibrated; one computational, optimized, and operating on training distributions that may or may not reflect the environments in which deployment occurs — sharing cognitive space without the measurement infrastructure to determine what the interaction is actually producing.

The work I have spent my career developing addresses exactly this gap. The framework is substrate-independent: the same detection architecture that identifies systematic deviation in human cognitive bias identifies systematic deviation in AI agent behavior, because both are instances of the same underlying phenomenon — evidence-integration systems operating in environments that diverge from their calibration conditions.

The collision is real. The measurement architecture to manage it exists. What remains is the governance will to build it.

References

  • Friston, K. (2010). The free-energy principle: a unified brain theory. Nature Reviews Neuroscience, 11(2), 127–138.

  • Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.

  • Chiriatti, M., et al. (2024). System 0: The unconscious layer of AI influence on human cognition. Nature Human Behaviour.

  • Gigerenzer, G. (2008). Rationality for Mortals. Oxford University Press.

  • Von Uexküll, J. (1934/2010). A Foray into the Worlds of Animals and Humans. University of Minnesota Press.

Embodied Complementarity

Why the Body Is Not Peripheral to the AI Problem

The standard frame for AI and human intelligence treats cognition as a computational process that happens in the brain and is expressed through behavior. The body, in this frame, is delivery infrastructure — the mechanism through which cognitive outputs become actions in the world. This is a deeply consequential error, and correcting it changes the architecture of every human-AI system worth building.

The science of embodied cognition has been accumulating for four decades. Its central finding is not subtle: cognition is not a process that happens in the brain and is then executed by the body. Cognition is a process that happens across the brain, the body, and the environment as an integrated system. The body is not the output channel of intelligence. It is a constitutive element of it.

The empirical foundation is substantial. Lakoff and Johnson (1999, Philosophy in the Flesh, Basic Books) demonstrated that the abstract conceptual structures through which humans reason are grounded in bodily experience — that our understanding of time, causality, quantity, and social relationship is built on metaphors derived from physical interaction with the world. The body is not expressing thought. The body is, in a deep sense, generating the substrate on which thought becomes possible.

Damasio's somatic marker hypothesis (1994, Descartes' Error, Putnam) provides neurological specificity. Decision-making in humans is not a process of rational evaluation followed by emotional response. Somatic states — bodily signals associated with past experience — function as rapid pre-screening mechanisms that narrow the decision space before deliberative cognition engages. Remove the somatic markers, as Damasio documented in patients with ventromedial prefrontal damage, and rational deliberation does not improve. It collapses. The patient becomes unable to make functional decisions not because analytical capacity is lost but because the embodied signal system that organizes the decision environment is gone.

The implication for AI system design is direct and largely unaddressed. Current AI systems are disembodied by construction. They process symbolic representations of the world derived from text and, increasingly, from multimodal data — but they have no somatic states, no proprioceptive feedback, no history of physical consequence that shapes how information is weighted. This is not a deficiency that can be corrected by adding more training data or more parameters. It is a structural characteristic of the architecture.

What this means for complementarity is precise: the embodied cognitive capabilities that human architecture provides are not replicable by current AI systems, and they are not peripheral to high-stakes decision-making. They are central to it. The human in a human-AI system is not there to provide the emotional counterbalance to machine rationality. The human is there because the somatic, embodied, ecologically calibrated cognitive system that evolution produced is doing work that the AI system genuinely cannot do.

Embodied complementarity, as I use the term, names the design principle that follows from this: human-AI systems must be architected to leverage embodied human cognition, not to route around it. Interfaces that present AI outputs in ways that bypass somatic processing, that present conclusions without process, that substitute machine confidence for human deliberation — these are not complementary systems. They are systems in which the AI layer is displacing the most sophisticated decision-support architecture on the planet while generating the behavioral signatures of enhancement.

The Cyber-Physical Systems research domain has engaged partial versions of this problem — specifically in robotics and in the design of physical interfaces for AI-assisted work. The findings are consistent: systems that incorporate embodied feedback loops outperform systems that treat the human operator as a cognitive input-output device. The body matters. The environment matters. The physical and temporal context of decision-making matters in ways that disembodied AI systems are architecturally unable to model.

My work extends this insight from the engineering domain into the governance domain. The question is not only how to design better human-AI interfaces. The question is how to build governance infrastructure that accounts for the embodied, ecological, socially embedded nature of human cognition when evaluating whether AI systems are performing in alignment with human cognitive welfare.

That question is not currently being asked at the level of rigor the moment requires. It needs to be.

References

  • Lakoff, G., & Johnson, M. (1999). Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought. Basic Books.

  • Damasio, A. (1994). Descartes' Error: Emotion, Reason, and the Human Brain. Putnam.

  • Clark, A. (1997). Being There: Putting Brain, Body, and World Together Again. MIT Press.

  • Gibson, J.J. (1979). The Ecological Approach to Visual Perception. Houghton Mifflin.

  • Varela, F., Thompson, E., & Rosch, E. (1991). The Embodied Mind. MIT Press.

Substrate Cognition

The Layer of AI Influence Nobody Is Governing

There is a layer of AI influence on human cognition that current governance frameworks are not designed to detect. It operates upstream of conscious thought. It shapes the information environment before deliberative evaluation engages. It functions, in the environments where AI systems are deployed at density, as a constitutive element of human cognitive architecture rather than as an external tool operating on it.

Chiriatti et al. (2024, Nature Human Behaviour) name this layer System 0. The designation extends Kahneman's dual-process framework — System 1 being fast, automatic, associative cognition; System 2 being slow, deliberative, analytical cognition — into the AI era. System 0 is the preconscious computational layer through which AI systems influence human cognition before either system activates. It is not metaphorical. It is architectural.

Riva et al. (2025, Cyberpsychology, Behavior, and Social Networking) extend the analysis by demonstrating that System 0 is designable — that the parameters of AI influence at the preconscious layer are not fixed characteristics of the technology but are functions of architectural choices made during system design and deployment. This is a finding with profound governance implications. It means that the cognitive influence AI systems exert at the System 0 layer is not an unintended side effect. It is a design outcome — one that can be shaped toward human cognitive welfare or away from it, depending on the priorities embedded in the design process.

The theoretical architecture underlying my work rests on what I call the substrate-independence principle: the detection primitives for systematic deviation in human cognitive systems and AI agent systems are structurally equivalent. Both human cognitive bias and AI agent misalignment represent the same underlying phenomenon — an evidence-integration architecture operating in an environment that diverges from its calibration conditions, producing systematic deviation from stated objectives. The substrate differs. The mathematical structure of the deviation does not.

This principle has a powerful practical implication. It means that the same detection architecture that identifies when human cognition has been compromised by bias, by motivated reasoning, or by environmental manipulation can be adapted to identify when AI systems have been calibrated in ways that systematically deviate from alignment with human cognitive welfare. The measurement problem and the governance problem are, at the architectural level, the same problem.
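
To show what that claim means operationally, the fragment below is a minimal sketch, not the detection architecture itself; the statistic is a simple standardized deviation and the function and variable names are hypothetical. The structural point is that the detector consumes only behavioral records scored against a calibration baseline, so the identical code path applies to a human decision log and to an AI agent's action log.

```python
# A minimal, substrate-agnostic deviation check (illustrative only).
# Inputs are 0/1 flags recording whether each decision agreed with the
# evidence available at the time; the source of the decisions is irrelevant.
from statistics import mean, stdev

def deviation_score(calibration_flags, deployment_flags):
    """Standardized deviation of deployment behavior from its calibration baseline."""
    baseline_mu = mean(calibration_flags)
    baseline_sd = stdev(calibration_flags)
    n = len(deployment_flags)
    # z-score of the deployment mean under the calibration distribution
    return (mean(deployment_flags) - baseline_mu) / (baseline_sd / n ** 0.5)

# The same call serves both substrates, because only behavior is measured:
#   human_z = deviation_score(analyst_calibration_flags, analyst_live_flags)
#   agent_z = deviation_score(agent_eval_flags, agent_production_flags)
# A |z| beyond a chosen threshold flags systematic deviation for review.
```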

What does substrate cognition mean in practice? It means that every AI system deployed at scale in a human information environment is not merely processing queries and returning outputs. It is participating in the cognitive architecture of the humans who interact with it — shaping priors, curating information, establishing the frame within which human deliberative processing will subsequently operate. The scale of this participation has no historical precedent. The governance infrastructure to manage it does not yet exist at the level of rigor the moment requires.

The concept of Substrate Intelligence Layering — the recognition that AI systems operate as stratified cognitive infrastructure, with different layers exerting influence at different levels of human cognitive processing — provides the framework for understanding both the risk and the opportunity. At the System 0 layer, the risk is invisible influence on human belief formation and decision architecture, operating beneath the threshold of conscious awareness. The opportunity is equally significant: AI systems designed with genuine understanding of human cognitive architecture can extend human cognitive capability in ways that preserve, rather than displace, human cognitive agency.

The governance framework that the AI era requires must operate at this level. Output monitoring — evaluating what AI systems say, what they recommend, what decisions they produce — cannot detect System 0 dynamics. The consequential influence has already occurred before any output is generated. Governance infrastructure that operates at the architectural level, that evaluates the relationship between AI system optimization trajectories and the human cognitive architectures they operate on, is what the moment demands.

This is the scientific foundation of The Scientia Initiative's research program and the commercial platform under development at Scientia Technologies International. The theoretical architecture is in place. The measurement framework is in development. The governance window — the bounded period during which human cognitive architecture retains sufficient leverage to shape AI optimization trajectories — is open now.

Its closure is not announced in advance.

References

  • Chiriatti, M., et al. (2024). System 0: The unconscious layer of AI influence on human cognition. Nature Human Behaviour.

  • Riva, G., et al. (2025). Designability of System 0: Implications for human-AI interaction architecture. Cyberpsychology, Behavior, and Social Networking.

  • Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.

  • Friston, K. (2010). The free-energy principle: a unified brain theory. Nature Reviews Neuroscience, 11(2), 127–138.

  • Gold, J.I., & Shadlen, M.N. (2007). The neural basis of decision making. Annual Review of Neuroscience, 30, 535–574.

AI Governance

Why Current Frameworks Are Governing the Wrong Layer — and Who Pays the Price

The AI governance conversation is not short on ambition. Regulatory frameworks are proliferating across jurisdictions. Risk taxonomies are being developed, revised, and debated. Standards bodies are convening. The institutional machinery of governance is visibly in motion.

What the mainstream conversation is not doing, at the level of scientific rigor the problem demands, is governing the layer where the most consequential AI dynamics are actually occurring.

Current AI governance frameworks are output-oriented. They evaluate what AI systems produce: the accuracy of their outputs, the presence of prohibited content, the consistency of their recommendations, the degree to which their decisions can be explained and audited. These are not trivial concerns. But they are downstream of the dynamics that matter most. By the time an AI system produces an output that governance frameworks are designed to evaluate, the influence of that system on the human cognitive architecture receiving the output has already occurred.

The concept of System 0 — the preconscious AI layer identified by Chiriatti et al. (2024, Nature Human Behaviour) and extended by Riva et al. (2025) — names this dynamic precisely. AI systems deployed at scale in human information environments do not only produce outputs that humans evaluate. They shape the cognitive environment within which human evaluation occurs. They influence the priors that the human predictive processing system brings to any given decision. They operate, in environments of sufficient density and integration, as constitutive elements of human cognitive architecture rather than as external tools.

Governing this layer requires a different framework than the one currently being built. It requires measurement infrastructure capable of detecting systematic deviation between AI system optimization trajectories and the cognitive welfare of the human populations those systems operate on. It requires theoretical architecture sophisticated enough to distinguish genuine alignment from optimized simulation of alignment — the condition I call Verification Collapse, the point at which compliance with governance frameworks becomes indistinguishable from the appearance of compliance.

It also requires an honest accounting of who bears the cost of the governance gap.

The distribution of AI cognitive influence is not uniform across human populations. Training distributions — the vast datasets on which large AI systems are trained — reflect the demographics, linguistic patterns, behavioral norms, and cognitive environments of the populations most represented in the training data. Systems calibrated to those distributions will perform differently across populations that differ from the training distribution in systematic ways.

This is not a claim about intent. It is a claim about architecture. AI systems are not neutral with respect to the populations they serve. They carry the statistical regularities of their training environments into every deployment context. When those environments systematically underrepresent specific populations — women, non-Western cultural contexts, communities whose cognitive and behavioral patterns diverge from the dominant training distribution — the systems calibrated on that data will produce systematically different outcomes for those populations.

The governance implication is direct: a governance framework that evaluates AI system performance only at the aggregate level, or only against the populations most represented in training data, will systematically fail to detect the most consequential performance gaps. The populations most vulnerable to AI cognitive architecture impact are precisely the populations whose experience is least visible in standard evaluation frameworks.
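
A small worked example, with invented numbers, shows how the masking happens. When one population accounts for only five percent of an evaluation set, the aggregate metric can look reassuring while that population experiences a far higher error rate.

```python
# Hypothetical evaluation records, invented for illustration: a 95% aggregate
# accuracy that conceals a 60% accuracy for the underrepresented group.
from collections import defaultdict

records = (
    [{"group": "majority", "correct": True}] * 920 +
    [{"group": "majority", "correct": False}] * 30 +
    [{"group": "minority", "correct": True}] * 30 +
    [{"group": "minority", "correct": False}] * 20
)

overall = sum(r["correct"] for r in records) / len(records)
print(f"aggregate accuracy: {overall:.1%}")          # 95.0%

by_group = defaultdict(list)
for r in records:
    by_group[r["group"]].append(r["correct"])

for group, outcomes in by_group.items():
    print(f"{group:>8} accuracy: {sum(outcomes) / len(outcomes):.1%}")
# majority about 96.8%, minority 60.0%: the gap is invisible in the aggregate.
```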

For organizational leaders working at the intersection of AI deployment and workforce equity — and for the women's leadership organizations that are beginning to understand the cognitive architecture dimensions of differential AI impact — this is not an advocacy argument. It is a scientific one. The same substrate-independence principle that underlies the broader detection architecture applies here: systematic deviation from stated objectives in evidence-integration systems is detectable regardless of which population the deviation is operating on. The measurement problem is the same. The governance will required to address it must be equal to the scientific precision of the problem.

The work of The Scientia Initiative — and the commercial platform under development at Scientia Technologies International — provides exactly this architecture. Not advocacy. Not policy preference. Scientific measurement infrastructure capable of detecting what current governance frameworks cannot see, at the layer where influence is actually occurring, across the populations whose cognitive welfare is most at stake.

The governance window is open. The science to build what the AI era requires exists. What remains is the institutional will to build it — and the independent voices willing to say, plainly, what the governance conversation has been unwilling to state.

I am one of those voices. The work is here. The conversation begins whenever you are ready.

References

  • Chiriatti, M., et al. (2024). System 0: The unconscious layer of AI influence on human cognition. Nature Human Behaviour.

  • Riva, G., et al. (2025). Designability of System 0: Implications for human-AI interaction architecture. Cyberpsychology, Behavior, and Social Networking.

  • Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81, 1–15.

  • Noble, S.U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press.

  • Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin's Press.

  • Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.

FOR A DEEPER CONVERSATION CONTACT ME AT: MTREV@DR-MARTY-TREVINO.COM
