
Trust in Data Human Factors (Part 2)

Updated: Jul 30, 2023


Individuals make data-informed decisions every day in and out of organizations worldwide. Informing these decisions are dashboards, metrics, measures, indicators, visual analytics, and even Artificial Intelligence. Access to data is often by design, with the strategic intent being to “engrain data-driven decision-making in the DNA of the firm.” Yet how our brains make data-informed decisions remains poorly understood, both by management and by those tasked with generating and curating the data, measures, metrics, and visual analytics.

Inherent in the challenge of understanding how the brain makes decisions (or does not) with data is, in part, how the management sciences have framed and approached the problem of deciphering data-informed decision-making. In the management sciences, data-driven decision-making is largely approached as a behavioral challenge, with the individual and the decision processes in the brain viewed as malleable. The central tenet advanced by advocates is a permutation of: “if data or evidence were available in the way that the decision-maker desired, they would then make decisions based on the data before them vs. ‘gut feel.’” Unfortunately, this framing is fundamentally incorrect, as there are intractable interactions between brain systems. These systems and functions include how our mental models are informed and updated, how the various forms of memory are structured and function, and how the brain operates when making high-order decisions.

This contrarian view, opposed to simply observing behaviors and putting forth non-scientifically founded tenets, is validated by countless witnessed acts of counterintuitive decision behavior: decision-makers never accessing dashboards designed specifically to their requirements, or rejecting data that contradict their internal models in favor of “a gut feeling.” Those in the room are often left bewildered at what they just witnessed, never understanding that what they saw is rooted in the deep cognitive and neurological functioning of the brain.


Understanding how the brain makes data-informed decisions is not simply an esoteric or theoretical exercise. A scientific understanding of how we make decisions with data enables various non-trivial functions, from technology selection to capability development and, perhaps most important, new models of Human/AI augmentation.

A cognitive neuroscience-level understanding of how the brain perceives, understands, builds knowledge, makes predictions, etc., will underpin the next generation of human/computer complementarity. The idea of complementarity in the form of ‘human-centric’ interface design is not new, but it is only now being grounded in science vs. contrived beliefs. We can now specifically design User Interfaces, Dashboards, and even individual Visual Analytics to create higher degrees of resonance by targeting the brain’s decision-making processes and even individual heuristics and biases.[1] Integrating Cognitive Neuroscience into IT, OT, and DX to improve strategic decision-making has immense potential to contribute to operational efficacy, uniqueness, and competitive advantage.


There has been a single overarching error in virtually all efforts to improve data-driven decision-making: to approach the challenge as a technology, data, or visualization problem vs. a deep understanding of how the brain interrogates, trusts, and then makes decisions with data. Ask any Data Scientist what they need to improve data-informed decision-making, and you will get back a variation of “we need more data, labeled data, in relevant time, better methods, models,” etc. At no point will the response be, “we need a deeper understanding of precisely how the Neocortex updates its internal models when presented with variation to its predictions.” We are attempting to solve an internal human brain mystery through variations of external data and methods.

To discuss this topic in depth would require multiple dissertations. Still, we can simplify some of the intractable functioning of the brain when analyzing data and apply this understanding to make immediate operational impacts in virtually any business environment. We can begin improving our UX/UI design and data-driven decision-making efforts with the following cognitive truths:

  • Every person interrogates data differently.[2] Areas of focus and duration of examination vary greatly between individuals. Thus, our ‘outcomes’ and ‘takeaways’ also differ.

  • The brain examines data with, unconsciously, predetermined outcomes (often, we see what we want to see in data).

  • These outcomes are intended to validate our presuppositions and beliefs – thus, seeing is not believing.[3] We cognitively ignore and dismiss data that does not agree with our mental models and presuppositions.

  • The brain will almost always select System 1 (rapid decisions based on knowledge, beliefs, heuristics, bias, and preference) vs. System 2 (slower, deep thinking, analytical decisions) when making decisions.[4]

  • The brain uses 6x more information from what it believes to be true vs. what it sees with its eyes when updating its mental model.

  • Decision errors are ubiquitous and range from improper framing to optimism bias to WYSIATI and many more.[5]
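The weighting claim above can be made concrete with a toy model. The sketch below is purely illustrative (not from the article or any neuroscience literature): it treats a mental-model update as a weighted average in which the existing belief receives roughly six times the weight of the new observation, so even stark dashboard evidence moves the belief only slowly.

```python
# Toy illustration of a belief update in which the prior internal model
# is weighted ~6x more heavily than new observed evidence.

def update_mental_model(prior_belief: float, observation: float,
                        prior_weight: float = 6.0) -> float:
    """Blend a prior belief with a new observation.

    prior_weight=6.0 reflects the hypothetical 6:1 weighting of the
    internal model vs. sensory input described in the bullets above.
    """
    return (prior_weight * prior_belief + observation) / (prior_weight + 1.0)

belief = 100.0          # e.g., a manager's expected quarterly figure
dashboard_value = 30.0  # contradictory data shown on the dashboard

for _ in range(3):
    belief = update_mental_model(belief, dashboard_value)
    # Belief drifts toward the data only gradually: 90.0, ~81.4, ~74.1
```

Under this (assumed) weighting, repeated exposure to the same contradictory data is required before the internal model converges on it, which is consistent with the observed tendency to dismiss a single contradicting chart.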

“We’ve greatly increased the color options for charts on our dashboards. This will improve decision-making.”

-Senior Sales Executive - Self-Service BI Company


When system designers talk about “Human Centric Design” and enabling “Humans to do what they do best and Computers to do the same,” most have little idea beyond simplistic tenets of what such a statement implies. In designing next-generation analytics and human/AI augmentation, we must move beyond simple tenets through a deep understanding of how humans make decisions, develop knowledge, and come to trust data and technology.

We can now conceptualize Human/AI Complementarity to enable data interrogation and exploration by a working pair of human and ‘mated’ AI.[6] An example of this Complementarity is the structuring of novel user interfaces and analytics to engage the brain’s hemispheric functions, maximize short-term and working memory, force System 2 usage, and create higher degrees of resonance through 3D and 2D interfaces and techniques such as juxtaposition and geo-temporal and time-based sequencing. A central tenet of cognitive neuroscience is that the brain explores the world through movement. Yet, for some reason, the designers of immersive gaming understand this tenet well, while UX/UI analytics systems designers remain oblivious to it. A relevant question is, “Could we improve decision-maker understanding and situational awareness if we designed next-generation analytics systems with this understanding as a data exploration basis?”

To this end, specific tools and techniques are emerging that are of immediate use; these include dimensional scaffolding (borrowed from Biology and the medical field) and 2- and 3-dimensional geo-temporal exploration as a central UI element. The latter is intended to enable a form of storytelling while dealing with the complex issues of dimensionality, time, and our limited memory.

This latest treatment of Complementarity with a cognitive neuroscience basis extends far beyond what is currently fielded and warrants extensive exploration by any organization seeking to be at the forefront of data-informed decision-making and human/AI working models. The potential outcomes are proprietary working models that produce increased productivity, performance, and innovation. Next-generation analytic interfaces will dynamically design and update the visual and temporal elements of data and contextual information according to unique user-based and/or persona “day-in-the-life” profiles, job roles, industry verticals, and peer behaviors. Individually specific AI will algorithmically recommend and highlight data elements for inspection, aggregate and separate contextual information according to priority/relevance, and surface this to the user with temporal specificity. In addition, the system will generate the specific Visual Analytics for which the user has shown a preference, such as Juxtaposing vs. Time Sequencing of data. Human/AI Augmentation is designed to engender trust in data and to work ‘with’ the brain in its decision processes, its intractable elements, and how it perceives data/information. Trust poses fascinating challenges, as the latest research shows that trust in technology and data is formed in vastly different ways than trust in people.
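The recommendation behavior described above can be sketched in a few lines. The code below is a hypothetical illustration only: every name (`UserProfile`, `score_element`, `recommend`) and the 50/50 blending weight are assumptions for the sketch, not part of any fielded system. It ranks candidate dashboard elements for one user by blending a global priority with that user's observed engagement, and also returns the visual-analytic style the user has preferred.

```python
# Hypothetical sketch of an "individually specific AI" that highlights
# dashboard elements per user. All names and weights are illustrative.

from dataclasses import dataclass, field

@dataclass
class UserProfile:
    role: str
    # fraction of past sessions in which the user engaged with each element
    interaction_rates: dict = field(default_factory=dict)
    preferred_view: str = "juxtaposition"  # vs. "time_sequencing"

def score_element(profile: UserProfile, element: str, priority: float) -> float:
    """Blend global priority with this user's observed engagement (50/50)."""
    engagement = profile.interaction_rates.get(element, 0.0)
    return 0.5 * priority + 0.5 * engagement

def recommend(profile: UserProfile, elements: dict, top_n: int = 3):
    """Return the top-N elements to highlight, plus the preferred view type."""
    ranked = sorted(elements,
                    key=lambda e: score_element(profile, e, elements[e]),
                    reverse=True)
    return ranked[:top_n], profile.preferred_view

# Example: two CFOs with identical roles but different exploration habits
# receive different highlights, per endnote [2].
cfo = UserProfile(role="CFO",
                  interaction_rates={"cash_flow": 0.9, "churn": 0.2})
highlights, view = recommend(cfo, {"cash_flow": 0.6, "churn": 0.8,
                                   "headcount": 0.4})
# highlights favors the elements this particular user actually inspects
```

The design choice worth noting is that the unit of personalization is the individual profile rather than the role, echoing the point that no two CFOs interrogate data identically.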


As we conceptualize and design the next generation of analytic systems, we do so through the new lens of Complementarity and an inside-out, human-centric perspective. As enterprise risk rises and insurance and other costs spiral upward, it is not enough to make incremental improvements in our system designs to improve decision-making. Can we create true disruption in the best sense by altering our approach to data-informed decision-making at the highest levels? In the next series of compositions, we will explore the application of Multi-Modal Cognition as a core element of next-generation analytic interfaces and decision-making.


[1] Reference to Daniel Kahneman’s groundbreaking work.

[2] This is huge because it changes the unit of measure from X to 1. It forces us to focus on enabling each decision-maker vs. building generic systems/interfaces. That is, no two CFOs or CISOs are identical in how they explore data, inform their mental models, and determine action based on data.

[3] See the research of Russian psychologist Alfred Yarbus.

[4] According to Kahneman, System 1 is rapid, heuristic-based, and largely unconscious; it is both accurate and prone to error. System 2 is slow, requires more energy, and involves detail-oriented examination of data/analytics or information regarding a high-risk topic.

[5] WYSIATI – What You See Is All There Is (Daniel Kahneman)

[6] Specific to the User, persona, and domain.

[7] A reference to the excellent book Zero to One by Peter Thiel and Blake Masters.

