Automation Surprise & Mode Confusion
The Bias
- Bias: The operator expects one behavior from an automated system but observes another, or does not understand which mode the system is operating in, even though the system is functioning correctly according to its programming.
- What it breaks: Safety of critical systems, operators' situational awareness, trust in automated systems, and the ability to respond quickly to unexpected equipment behavior.
- Evidence level: L1 — confirmed by formal verification methods, experimental studies in aviation and medicine, and analysis of real incidents. Over 15 peer‑reviewed studies, including work by Rushby and Dubus as well as field surveys in aviation.
- How to spot in 30 seconds: The system behaves differently than you expected; you are unsure which mode the automation is in; you ask yourself “what is it doing?” or “why is it doing that?”; confusion arises even though the system is operating “correctly.”
When Automation Does Not Do What You Expect
Automation surprise occurs when an automated system behaves in a way that differs from the operator’s expectations or mental model of the system. A pilot expects one behavior but observes another, leading to confusion and potential safety risks (S003, S007). This mismatch between expected and actual behavior can happen even when the system is functioning perfectly according to its programming.
Mode confusion is a specific subtype of automation surprise in which the operator is unaware of the system’s current mode or misinterprets it. This can lead to inappropriate control actions and is linked to the complexity of mode logic in modern flight‑control systems (S001, S004). Studies show that mode confusion is especially hazardous in commercial aviation, where flight‑control systems have many interrelated modes.
A third related phenomenon — GIGO (Garbage In, Garbage Out) — embodies the principle that erroneous input data inevitably leads to erroneous output, regardless of system sophistication. In aviation this is usually tied to pilot data‑entry errors (S005). The system processes the incorrect data correctly, the output appears valid, but it is based on faulty input parameters.
All three phenomena are most common in highly automated critical systems — commercial aviation, medical equipment, nuclear power, and military control systems. Field surveys of pilots showed that most associate automation‑surprise events with data‑entry errors, underscoring the interrelation of the three phenomena (S002). Formal analysis using model‑checking methods demonstrated that these issues can be identified during system design, indicating a systemic rather than incidental nature (S006).
It is critical to recognize that these phenomena are not merely “operator errors” but fundamental human‑automation interaction problems in safety‑critical systems. They occur even among well‑trained, competent operators and point to shortcomings in human‑machine interface design. Experimental studies have confirmed a link between automation surprise, mode awareness, and overall situational awareness, demonstrating the multi‑layered nature of the problem.
Operators often experience an illusion of control, believing they fully understand the automated system’s logic, which hampers detection of mode confusion. The link to confirmation bias shows up when operators seek confirmation of their expectations about the system’s mode and ignore contradictory cues. Hindsight bias frequently distorts post‑event analysis, making the danger seem obvious after the fact.
Mechanism
The Cognitive Architecture of Surprise: How the Brain Loses Control Over Automation
The neuropsychological mechanism of automation surprise and mode confusion is rooted in the fundamental process of forming and using mental models—internal representations of how a system works. When a person interacts with a complex automated system, the brain creates a simplified model of the system’s behavior based on prior experience, training, and observation (S003, S004). This mental model enables prediction of the system’s behavior and planning of actions.
The problem arises when the operator’s mental model is incomplete, inaccurate, or outdated. Modern automated systems, especially in aviation, exhibit extreme complexity with numerous interacting modes, conditional transitions, and hidden states. The human brain, optimized for relatively simple cause‑effect relationships, struggles to construct an accurate model of such complexity (S005).
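To make this gap concrete, here is a minimal sketch of two finite‑state machines: the operator's simplified mental model and the actual automation. The mode names and transition rules are invented for illustration, not taken from any real system.

```python
# Minimal sketch: an operator's simplified mental model vs. the actual automation.
# All mode names and transition rules here are invented for illustration.

def step(transitions, mode, event):
    """Return the next mode; stay in the current mode if no rule matches."""
    return transitions.get((mode, event), mode)

# What the operator believes: "descend" always engages vertical-speed mode.
mental_model = {
    ("CRUISE", "descend"): "VERTICAL_SPEED",
}

# What the system actually does: after an altitude capture it sits in a different
# state, and "descend" then engages another sub-mode (a hidden conditional transition).
actual_system = {
    ("CRUISE", "altitude_capture"): "ALT_HOLD",
    ("CRUISE", "descend"): "VERTICAL_SPEED",
    ("ALT_HOLD", "descend"): "FLIGHT_PATH_ANGLE",
}

believed = actual = "CRUISE"
for event in ["altitude_capture", "descend"]:
    believed = step(mental_model, believed, event)
    actual = step(actual_system, actual, event)

print("operator believes:", believed)  # VERTICAL_SPEED
print("system is in:     ", actual)    # FLIGHT_PATH_ANGLE -> automation surprise
```

The same input sequence leaves the two machines in different modes, which is exactly the point at which the next command produces a surprise.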
The Gap Between Expectation and Reality: When Intuition Fails
Automation surprise and mode confusion feel especially disorienting because they violate the basic expectation of predictability. When a system behaves contrary to expectations, it creates cognitive dissonance—a conflict between what should happen (according to the mental model) and what actually occurs. The brain interprets this as “surprise,” triggering attention and stress systems.
Intuitively, it seems that if we understand a system correctly and interact with it properly, it should behave predictably. This intuition holds for simple tools (a hammer, a bicycle) but breaks down for complex automated systems with multiple modes. An operator may be completely confident in his understanding of the system’s current state because his mental model is internally consistent and based on prior successful experience (S001).
Cascading Effect: From Confusion to Loss of Situational Awareness
An experimental study by Dubus et al. (2024) demonstrated a direct link between automation surprise and reduced situational awareness. Participants who experienced unexpected automation behavior showed a marked decline in their ability to track the overall situation and make appropriate decisions (S006). This indicates that automation surprise does not merely cause brief confusion but can have a cascading effect on the operator’s cognitive processes.
A study by Leadens (2020) involving commercial airline pilots uncovered an important pattern: most respondents reported that their experience of automation surprise was linked to manual entry or data selection errors rather than system malfunction. This confirms that the problem often originates with the human factor but is exacerbated by insufficient feedback from the automation about what was entered and how the system interpreted it (S002).
Predictability of Complexity: What Formal Methods Reveal
Formal verification methods employed by Rushby uncovered potential mode‑confusion scenarios during system design. By using model checking, the researchers systematically explored all possible system states and identified situations where system behavior might not align with reasonable operator expectations (S007). This confirms that many instances of automation surprise are predictable consequences of design complexity rather than random events.
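The core idea behind this kind of analysis can be sketched in a few lines: exhaustively explore every reachable pair of (actual mode, mode the operator expects) and report the pairs where they disagree. The toy automaton below only illustrates the technique; it is not one of Rushby's actual models.

```python
from collections import deque

# Toy "model checking" sketch (all modes and transitions are invented):
# exhaustively search the product of the actual system and the operator's
# expectation model, and report every reachable state where they disagree.

EVENTS = ["engage", "capture", "descend"]

ACTUAL = {
    ("MANUAL", "engage"): "CRUISE",
    ("CRUISE", "capture"): "ALT_HOLD",
    ("CRUISE", "descend"): "VS",
    ("ALT_HOLD", "descend"): "FPA",  # hidden transition missing from the operator model
}

EXPECTED = {
    ("MANUAL", "engage"): "CRUISE",
    ("CRUISE", "capture"): "ALT_HOLD",
    ("CRUISE", "descend"): "VS",
    ("ALT_HOLD", "descend"): "VS",   # the operator expects "descend" to mean vertical speed
}

def find_divergences(start="MANUAL"):
    seen, queue, divergences = set(), deque([(start, start, ())]), []
    while queue:
        actual, expected, trace = queue.popleft()
        if (actual, expected) in seen:
            continue
        seen.add((actual, expected))
        if actual != expected:
            divergences.append((trace, actual, expected))
        for event in EVENTS:
            queue.append((ACTUAL.get((actual, event), actual),
                          EXPECTED.get((expected, event), expected),
                          trace + (event,)))
    return divergences

for trace, actual, expected in find_divergences():
    print(f"after {trace}: system in {actual}, operator expects {expected}")
```

Even this tiny example finds the event sequence that ends with the system in one mode and the operator expecting another, which is what makes such scenarios identifiable at design time.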
| Factor | Impact on Mechanism | Manifestation |
|---|---|---|
| System complexity | Mental model does not cover all modes and transitions | Operator does not anticipate rare states |
| Hidden states | System operates in a mode invisible to the operator | Operator actions do not yield the expected result |
| Feedback | Insufficient information about what the system understood | Operator does not realize the input error until the surprise occurs |
| Cognitive load | Stress and haste reduce the accuracy of the mental model | Even experienced operators make mistakes |
| Experience and training | Successful experience reinforces an incomplete model | Operator becomes confident in a mistaken understanding |
Research in anesthesiology has shown that the same phenomena—automation surprise, mode confusion, and illusion of control—appear in operating rooms that use automated anesthesia delivery systems. This confirms the universality of these cognitive distortion mechanisms across various high‑stakes domains where humans interact with complex automation (S005).
The link to confirmation bias is especially significant: an operator who has formed a mental model tends to notice information that confirms it and ignore signals that contradict it. This reinforces confidence in a mistaken understanding of the system and makes the surprise even more unexpected when the system finally behaves contrary to expectations.
Example
Real Cases: When Automation Fails
Scenario 1: Aircraft Crash Due to Flight‑Mode Confusion
In 1996, the crew of a modern airliner programmed the flight management system (FMS) to descend from a cruise altitude of 35,000 feet. The pilots intended to set a vertical speed of 800 feet per minute, but the system was in a different mode than they assumed (S005). Instead of controlling vertical speed, the autopilot interpreted the input as a target pitch angle, and the aircraft began descending at 6,000 feet per minute—seven times faster than planned.
The crew did not notice the problem immediately because their mental model told them the system was functioning correctly. Visual indicators on the instrument panel were ambiguous regarding the active mode (S001). When the pilots finally realized the rapid descent, their attempts to correct the situation were hampered by a lack of understanding why the system behaved that way. This is a classic automation surprise: the system operated correctly according to its logic, but not at all as the crew expected.
The consequences were catastrophic. The aircraft struck a mountain ridge, killing 229 people. The investigation showed that modern flight‑control systems have many modes with subtle differences in logic, and transitions between them can occur automatically based on conditions that are not always obvious to pilots. The number of possible states and transitions exceeds a human’s capacity to maintain a complete model in working memory.
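A simplified sketch of how a single selector value can command very different descent rates depending on the active mode; the mode names, ground speed, and unit conventions below are illustrative assumptions, not the actual FMS logic.

```python
import math

# Illustrative only: the same raw selector value is interpreted completely
# differently depending on the active descent mode. The mode names, ground
# speed, and unit conventions are hypothetical, not the actual FMS logic.

GROUND_SPEED_KT = 450      # assumed ground speed, knots
FEET_PER_NM = 6076.12

def commanded_descent_fpm(mode, selector_value):
    if mode == "VERTICAL_SPEED":
        # Selector digits read as hundreds of feet per minute: 8 -> 800 ft/min.
        return selector_value * 100
    if mode == "FLIGHT_PATH_ANGLE":
        # The same digits read as a path angle in degrees: 8 -> 8.0 degrees down.
        nm_per_minute = GROUND_SPEED_KT / 60
        return nm_per_minute * FEET_PER_NM * math.tan(math.radians(selector_value))
    raise ValueError(f"unknown mode: {mode}")

selector = 8  # what the crew dialled, intending an 800 ft/min descent
print(commanded_descent_fpm("VERTICAL_SPEED", selector))            # 800 ft/min
print(round(commanded_descent_fpm("FLIGHT_PATH_ANGLE", selector)))  # ~6400 ft/min
```

The system computes both results correctly; the hazard lies entirely in which interpretation is active and whether the crew knows it.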
Scenario 2: Wrong Route Due to Input Error and Blind Trust
A driver uses GPS navigation for a trip to an unfamiliar city and accidentally selects the wrong street with a similar name—“Sadovaya Street” instead of “Sadovoy Passage.” The navigation system, receiving this input, correctly calculates an optimal route, but to the wrong destination (S002). The driver follows the directions, trusting the automation and not checking the address on the map.
This is a classic example of GIGO—“garbage in, garbage out”. The system functions flawlessly, the algorithms work correctly, but the result is useless because the input data were erroneous. The driver may not notice the problem for a long time, especially in an unfamiliar area where he has no independent way to verify the route. When he finally arrives at the unintended location, an automation surprise occurs: “Why did the navigation bring me here?”
This scenario demonstrates the link between input error and automation surprise. The error originated with the human factor but was amplified by automation that did not provide sufficient feedback to detect the mistake early (S003). Modern navigation systems rarely ask for confirmation such as “Are you sure you want to go to Sadovaya Street?” They assume the user’s input is correct and act accordingly.
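A minimal sketch of the kind of confirmation step that could catch such an error early: flag destinations whose names look suspiciously like the one selected and ask before routing. The street names and similarity threshold are illustrative assumptions.

```python
import difflib

# Sketch of a confirmation step for confusable destinations.
# Street names and the similarity threshold are illustrative assumptions.

KNOWN_DESTINATIONS = ["Sadovaya Street", "Sadovoy Passage"]

def confusable_alternatives(choice, candidates, threshold=0.5):
    """Return other known destinations whose names look suspiciously like the choice."""
    return [
        name for name in candidates
        if name != choice
        and difflib.SequenceMatcher(None, choice.lower(), name.lower()).ratio() >= threshold
    ]

choice = "Sadovaya Street"
similar = confusable_alternatives(choice, KNOWN_DESTINATIONS)
if similar:
    print(f"You selected '{choice}'. Easily confused with: {', '.join(similar)}.")
    # A navigation system could require explicit confirmation here
    # instead of silently routing to the first match.
```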
Scenario 3: Inadequate Anesthesia During Surgery
In the operating room, an anesthesiologist uses an automated drug‑delivery system. These systems have several operating modes: manual control, automatic maintenance of a target concentration, rapid‑induction mode, and gradual‑dose‑reduction mode. The physician switches the system to what he believes is the maintenance‑of‑stable‑concentration mode, but because of the preceding sequence of actions the system is actually in gradual‑dose‑reduction mode (S005).
Confident that the system is maintaining a stable level of anesthesia, the anesthesiologist shifts attention to other aspects of the procedure. Meanwhile, the anesthetic concentration slowly declines. When the patient begins to show signs of insufficient anesthesia—tachycardia, elevated blood pressure, limb movements—the clinician experiences an automation surprise. Confusion of modes at a critical moment can delay corrective actions and lead to complications for the patient.
This example shows that automation‑surprise problems are not limited to aviation—they are universal challenges in any domain where complex automation interacts with a human operator under high stakes. Research confirms that the same cognitive mechanisms appear in medicine, underscoring the need for a multidisciplinary approach to addressing these issues (S004).
Scenario 4: “Vacation” Mode Activated Instead of “Evening” in a Smart Home
The owner of a modern smart home has configured a sophisticated automation system with many scenes: “morning,” “day,” “evening,” “night,” “vacation,” “guest,” and others. Each scene controls lighting, temperature, security, and other systems differently. One evening he activates the “evening” scene, expecting soft lighting and the heating system to set a comfortable temperature.
Instead, all lights turn off, the temperature drops to the minimum, and the security alarm is armed. It turns out the system interpreted the sequence of his prior actions—checking security sensors and closing windows—as preparation for the “vacation” scene. His command “evening” was taken as confirmation of that mode. This is an automation surprise: the system behaved logically according to its programming, but not at all as the user expected.
This everyday example illustrates how increasing automation complexity can paradoxically reduce its usefulness and predictability (S006). The more modes and conditional transitions a system has, the harder it is for the user to maintain an accurate mental model of its behavior. The problem is exacerbated when systems try to be “smart” and anticipate user intentions, often leading to unexpected and undesirable outcomes.
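A toy sketch of why implicit intent inference is fragile: the first controller below lets an inferred scene silently override the user's explicit command, while the safer variant keeps the explicit command authoritative and only reports the guess. All scene names and inference rules are invented.

```python
# Toy smart-home controller (all scene names and inference rules are invented).
# Demonstrates how "smart" intent inference can override what the user asked for.

RECENT_ACTIONS = ["check_security_sensors", "close_windows"]

def infer_scene(recent_actions):
    """Naive inference: sensor checks plus closed windows 'look like' leaving home."""
    if {"check_security_sensors", "close_windows"} <= set(recent_actions):
        return "vacation"
    return None

def activate_smart(command, recent_actions):
    # The problematic design: an inferred scene silently wins over the command.
    return infer_scene(recent_actions) or command

def activate_explicit(command, recent_actions):
    # Safer design: the explicit command wins; inference only produces a question.
    guess = infer_scene(recent_actions)
    if guess and guess != command:
        print(f"You asked for '{command}', but recent activity looks like '{guess}'. "
              f"Keeping '{command}'.")
    return command

print(activate_smart("evening", RECENT_ACTIONS))     # vacation -> automation surprise
print(activate_explicit("evening", RECENT_ACTIONS))  # evening
```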
Red Flags
- • The operator fails to verify the system’s current mode before issuing a critical command.
- • The system behaves differently than the operator expects, yet the operator continues to follow the old procedure.
- • The operator assumes the automation has completed the task without receiving explicit confirmation.
- • The operator ignores warning signals, relying on the equipment’s usual behavior.
- • The operator doesn’t realize the system has switched to a different operating mode and makes incorrect decisions.
- • The system executes the command in an unexpected mode, but the operator doesn’t notice.
- • The operator relies on trust in the automation instead of verifying its actual state and actions.
Countermeasures
- ✓ Develop explicit system behavior models for each operating mode and regularly verify that actual behavior matches the documented specifications.
- ✓ Implement a mode‑indication system that provides continuous visual and audible feedback to the operator about the current operating mode.
- ✓ Conduct regular operator training on simulators that includes unexpected mode transitions and automation failures.
- ✓ Create a protocol under which the automated system requires explicit confirmation of critical commands before executing them.
- ✓ Set automatic fallback thresholds that switch the system to manual mode when a mismatch between expected and actual behavior is detected (see the sketch after this list).
- ✓ Document and analyze every instance of unexpected system behavior to uncover hidden modes and transitions.
- ✓ Introduce a two‑layer verification approach: formal code verification methods combined with empirical testing on real‑world scenarios.
- ✓ Develop clear recovery procedures for each known type of mode confusion, with step‑by‑step instructions for operators.
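As a rough illustration of how mode indication and automatic fallback can work together, the monitor below compares observed behavior with what the acknowledged mode predicts and reverts to manual control on a sustained mismatch. The mode names, tolerance, and sample values are hypothetical, not a real avionics or medical interface.

```python
# Hypothetical monitor: compares observed behavior with what the acknowledged
# mode predicts, and falls back to manual control on a sustained mismatch.

TOLERANCE_FPM = 500          # illustrative threshold
MAX_MISMATCHED_SAMPLES = 3   # how long a mismatch is tolerated before fallback

EXPECTED_DESCENT_FPM = {     # behavior the operator was told to expect per mode
    "VERTICAL_SPEED": 800,
    "FLIGHT_PATH_ANGLE": 6400,
}

def monitor(acknowledged_mode, observed_samples_fpm):
    """Print warnings; return 'MANUAL' if control should revert, else the mode."""
    expected = EXPECTED_DESCENT_FPM[acknowledged_mode]
    mismatched = 0
    for sample in observed_samples_fpm:
        if abs(sample - expected) > TOLERANCE_FPM:
            mismatched += 1
            print(f"warning: observed {sample} ft/min, expected ~{expected} ft/min "
                  f"for mode {acknowledged_mode}")
            if mismatched >= MAX_MISMATCHED_SAMPLES:
                print("sustained mismatch: reverting to MANUAL and alerting the operator")
                return "MANUAL"
        else:
            mismatched = 0
    return acknowledged_mode

# The operator acknowledged vertical-speed mode, but the observed descent rate
# looks like the system is actually flying a path angle.
print(monitor("VERTICAL_SPEED", [820, 3100, 5900, 6300, 6400]))
```

The exact thresholds would need to come from domain analysis; the point is only that the mismatch check runs continuously rather than relying on the operator to notice the discrepancy.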
Sources
- /sources/10-1016-s0951-8320-01-00092-8
- /sources/10-1590-jatm-v14-1282
- /sources/10-1145-2702123-2702521
- /sources/10-3390-safety3030020
- /sources/10-1016-s0952-8180-96-90009-4
- /sources/10-20944-preprints201703-0035-v1
- /sources/-automation-surprise-in-aviation
- /sources/learning-from-automation-surprises-and-going-sour-accidents-in-cognitive-enginee