Risks from Human Factors in Highly Automated Systems - Andrew Brown

As software development becomes increasingly automated, and as software becomes increasingly interconnected through the Internet of Things, what will happen to risks associated with human factors? Do they reduce, increase, or transform in ways that we can neither anticipate nor easily comprehend?

We examine the tragic events of Air France flight AF447 to illustrate that as systems become increasingly automated, they reduce risks arising from one set of human factors but become vulnerable to an entirely different set. We show how increased automation of tasks leads to a significant reduction in numerous small errors, but that this is coupled with an increased opportunity for a catastrophic error.

This increased opportunity for catastrophe typically comes via two routes. Firstly, operator load becomes polarised: it decreases under normal conditions but increases sharply in a crisis. Secondly, there is a de-skilling of roles, which leads to three effects:

  • A reduction in the calibre of people required for, and hence selected into or attracted to, the role.
  • Reduced opportunities to practise techniques, as practice itself becomes more dangerous.
  • The role becomes a dulled task, with employees often filling their time with non-work activities, such as Internet and mobile surfing, rather than honing job skills.

We use lessons and parallels from aviation and nuclear power generation to explore the implications of increased automation through DevOps-driven software development, and investigate the mitigation strategies available.


  1. As systems become more automated, there is a significant reduction in small errors.
  2. However, there is an increased opportunity for a catastrophic error.
  3. We should learn from industries, such as aviation and nuclear power generation, that have previously wrestled with this issue.