Automation Turned Deadly: MCAS and the MAX

A late redesign of Boeing’s MCAS automation — made more aggressive and dependent on a single sensor — played a central role in the two 737 MAX disasters, and it offers urgent lessons for how we build and govern safety-critical automation.

How an automated safety feature became part of a catastrophe

When two Boeing 737 MAX airliners crashed within months of each other in 2018–2019, investigators traced a common thread to the aircraft’s flight‑control software. A system designed to help the plane handle differently shaped engines — the Maneuvering Characteristics Augmentation System, or MCAS — was reworked late in development in ways that made its behaviour more assertive and, critically, reliant on a single angle‑of‑attack sensor. Those changes removed layers of protection and left crews exposed to a fast, confusing failure mode that they had not been trained to recognise or counter.

MCAS: what it was supposed to do

MCAS was introduced to help the MAX handle like earlier 737 models after larger, more forward‑mounted engines changed the aerodynamics. Its nominal job was modest: when certain conditions suggested the nose might pitch up too far, MCAS trimmed the horizontal stabiliser slightly nose‑down to keep handling consistent for pilots. In its original conception it was meant to be subtle and to operate rarely.
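To make that original concept concrete, here is a minimal sketch in Python of the kind of rule-based trim logic described above. The function name, thresholds and activation conditions are assumptions for illustration only, not Boeing's actual flight-control code.

```python
# Hypothetical sketch of the original, conservative MCAS concept.
# Names, thresholds and structure are invented for illustration;
# this is not Boeing's flight-control code.

HIGH_AOA_THRESHOLD_DEG = 14.0   # assumed angle of attack that triggers the nudge
SMALL_TRIM_INCREMENT_DEG = 0.6  # assumed small nose-down stabiliser command

def mcas_trim_command(aoa_deg: float, flaps_up: bool, autopilot_off: bool) -> float:
    """Return a small nose-down trim increment only in a narrow flight regime."""
    if flaps_up and autopilot_off and aoa_deg > HIGH_AOA_THRESHOLD_DEG:
        return SMALL_TRIM_INCREMENT_DEG  # subtle, bounded correction
    return 0.0                           # otherwise stay out of the way
```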

How the system changed — and why that mattered

During development Boeing expanded MCAS’s role. The implemented version could activate more frequently and apply larger stabiliser inputs than earlier drafts, and it was wired to react to data from a single angle‑of‑attack sensor rather than cross‑checking multiple sources. In both the Lion Air and Ethiopian Airlines accidents, erroneous angle‑of‑attack data triggered repeated nose‑down inputs that pilots fought but could not overcome in time. The shift from a conservative, redundant design to a more aggressive, single‑sensor implementation was a decisive factor in the failures.
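A stylised sketch, again with invented names and numbers, of why that combination was so dangerous: because only one angle-of-attack value is ever consulted, a sensor stuck at a high reading re-arms the system after every pilot correction, and the nose-down commands keep accumulating.

```python
# Stylised, hypothetical illustration (not flight code): a single faulty
# angle-of-attack reading re-triggers nose-down trim after each pilot correction.

AOA_TRIGGER_DEG = 14.0         # assumed activation threshold
TRIM_PER_ACTIVATION_DEG = 2.5  # assumed larger per-activation authority

def accumulated_nose_down(stuck_aoa_deg: float, pilot_corrections: int) -> float:
    """Each time the crew trims back, the stuck-high sensor re-arms the system."""
    total = 0.0
    for _ in range(pilot_corrections):
        if stuck_aoa_deg > AOA_TRIGGER_DEG:   # only one sensor is consulted
            total += TRIM_PER_ACTIVATION_DEG  # another nose-down increment
    return total

# A sensor stuck at 22 degrees and ten pilot corrections leave 25 degrees
# of commanded nose-down trim in total:
print(accumulated_nose_down(22.0, 10))  # 25.0
```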

Why calling MCAS “A.I.” is misleading — and why the label still matters

In media coverage and public debate, the crashes are sometimes framed as an “A.I.” failure. That shorthand is tempting but imprecise. MCAS was not a machine‑learning model that trained itself from data; it was deterministic flight‑control logic: rules coded to act on specific sensor inputs. The danger, however, is the same one people worry about with opaque AI systems — automated behaviour hidden from end users and regulators, interacting with messy real‑world signals in ways its designers did not fully anticipate.

Labelling MCAS as merely “automation” can underplay how design choices — especially around transparency, redundancy and human‑machine interaction — turned a protective feature into a hazard. The crashes show that even non‑learning algorithms demand rigorous safety engineering: the same kinds of traceable requirements and independent testing we now ask of AI systems in other domains.

Organisational and regulatory failures amplified technical flaws

Technical choices did not occur in a vacuum. Multiple reviews and hearings found that problems in oversight, communication and corporate culture amplified the risk. Regulators were not always presented with the full details of MCAS as it evolved; pilot manuals initially omitted the feature; and assumptions that MCAS would rarely activate reduced pilot training on how to respond when it did. These institutional breakdowns turned an engineering mistake into a public‑safety crisis.

The fixes Boeing and regulators implemented

After the grounding of the MAX fleet, Boeing and aviation authorities required software and operational changes. The revised design limits MCAS so that it only acts when both angle‑of‑attack sensors agree, restricts it to a single activation per event, and moderates the magnitude of trim inputs. Regulators also tightened requirements for documentation, pilot training and independent verification before the type was cleared to return to service. Those changes addressed the immediate failure modes but do not erase the broader governance questions exposed by the crisis.
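A rough sketch of those safeguards follows, with the function name, thresholds and disagreement tolerance assumed purely for illustration rather than taken from the certified implementation.

```python
# Hypothetical sketch of the post-fix safeguards described above: cross-check
# both angle-of-attack sensors, fire at most once per event, cap the authority.
# Values and structure are illustrative assumptions, not the certified code.

MAX_DISAGREEMENT_DEG = 5.5    # assumed tolerance between the two AoA vanes
AOA_TRIGGER_DEG = 14.0        # assumed activation threshold
MAX_TRIM_PER_EVENT_DEG = 2.5  # assumed cap on nose-down authority

def mcas_after_fix(aoa_left_deg: float, aoa_right_deg: float,
                   already_fired_this_event: bool) -> float:
    """Command nose-down trim only if both sensors agree and only once per event."""
    if abs(aoa_left_deg - aoa_right_deg) > MAX_DISAGREEMENT_DEG:
        return 0.0                        # sensors disagree: stand down
    if already_fired_this_event:
        return 0.0                        # single activation per high-AoA event
    if min(aoa_left_deg, aoa_right_deg) > AOA_TRIGGER_DEG:
        return MAX_TRIM_PER_EVENT_DEG     # bounded; pilots can still override
    return 0.0
```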

Lessons for the wider AI and automation debate

The MAX story is a cautionary primer for anyone deploying automation at scale. Four lessons stand out:

1. Design conservatively: an automated system that can move flight controls should have bounded authority, built-in redundancy and no dependence on a single sensor.
2. Communicate clearly with users: pilots and operators need to know the automation exists, what it does, and how to respond when it misbehaves.
3. Insist on independent review: substantive design changes deserve fresh analysis and testing by parties without a stake in the schedule.
4. Maintain robust regulatory oversight: certifiers need full visibility into automation as it evolves, not just the version first presented to them.

These are familiar refrains in AI ethics and safety circles, but the 737 MAX shows they are not abstract. In safety‑critical systems the costs of getting them wrong are immediate and final.

Where the conversation should go next

Technical fixes have returned the MAX to service under stricter conditions, but the episode remains a benchmark for how not to manage automation in regulated industries. For policymakers and engineers the imperative is to translate lessons into enforceable standards: clearer certification pathways for automated decision systems, mandatory reporting of substantive design changes, and institutional structures that reduce conflicts of interest between manufacturers and certifiers.

For journalists and the public, it is also a reminder to be precise about terms. “AI” grabs headlines, but the underlying problem in the MAX case was not artificial intelligence in the machine‑learning sense — it was a combination of aggressive automation, poor transparency and weakened safety practices. Treating that combination as an engineering and governance challenge gives us a more productive path to prevent a repeat.

Conclusion

The 737 MAX disasters were not inevitable. They were the outcome of specific design decisions, unchecked assumptions and institutional failures. As automation and AI push into more domains, the MAX case should stand as a stark example: safer systems emerge not from confidence in a piece of code but from conservative design, clear communication with users, independent review, and robust regulatory oversight. Those are not technical niceties — they are preconditions of public safety.

Mattias Risberg

Cologne-based science & technology reporter tracking semiconductors, space policy and data-driven investigations.

University of Cologne (Universität zu Köln) • Cologne, Germany

Readers Questions Answered

Q: What was MCAS supposed to do on the 737 MAX?
A: MCAS was designed to help the MAX cope with the changed aerodynamics of its larger, forward-mounted engines. Its nominal job was to trim the horizontal stabiliser slightly nose-down when sensors suggested the nose might pitch up too far, keeping handling consistent with earlier 737 models. In its original conception it was meant to be subtle and to operate rarely.
Q: How did MCAS change, and why did that matter?
A: During development Boeing expanded MCAS’s role, making it activate more frequently and apply larger stabiliser inputs, and it was wired to rely on data from a single angle-of-attack sensor rather than cross-checking multiple sources. Erroneous AoA data then triggered repeated nose-down inputs that pilots fought but could not overcome in time.
Q: Was MCAS AI?
A: MCAS was not a machine-learning model; it was deterministic flight-control logic, with rules coded to act on specific sensor inputs. Yet it raises the same concerns as opaque AI systems: automated behaviour that pilots and regulators may not anticipate. Labelling MCAS as mere automation can underplay design choices around transparency, redundancy and human-machine interaction.
Q: What fixes were implemented after the MAX grounding?
A: After the grounding, software changes limited MCAS to act only when both angle-of-attack sensors agree, restricted it to a single activation per event, and reduced the magnitude of its trim inputs. Regulators also tightened documentation, pilot training and independent verification before the type was cleared to return to service. These steps addressed the immediate failure modes but did not fully resolve the broader governance questions.
Q: What broader lessons does the MAX case offer for AI and automation?
A: Four lessons stand out: design conservatively, with redundancy and bounded authority; communicate clearly with the people who must work alongside the automation; insist on independent review of substantive design changes; and maintain robust regulatory oversight, including clearer certification pathways, mandatory reporting of design changes and institutional structures that reduce conflicts of interest between manufacturers and certifiers. In safety-critical systems the costs of getting these wrong are immediate and final.
