How human factors engineering mitigates the 'Swiss cheese effect'
On one level, the concept of human factors engineering would seem fairly self-explanatory. Products and systems are designed to be used by or for human beings, so it ought to be obvious that building the physical needs and requirements of the user into a design should be a basic facet of the designer's brief.
All too often, however, this is not the case. Of course, products and systems have always had to make allowances for the physical presence of a human user or occupant, but there is a lot more to human factors than simply treating the user as just another physical component that needs to be incorporated into the Bill of Materials.
Humans – by virtue of how they work – are often the single biggest point of failure in any system. IT professionals are painfully aware of this fact, so much so that they have devised the acronym 'PICNIC' to describe the problem. Its meaning – 'Problem In Chair Not In Computer' – is an amusing summation of the issue, but it nonetheless points to a larger truth: human beings are a major problem facing anyone designing technology to cater for them.
The realisation of the negative consequences that can result from poor human factors engineering is what has given the discipline its impetus. Even so, many engineers do not necessarily understand what it is. Ryan Meeks, human factors engineer with leading engineering consultancy Frazer Nash, offers a definition: "It's the study of how humans interact with systems, products and environments. We try and look at systems from a human perspective, so we take a physical, cognitive, organisational and environmental perspective on things. So we look at how those things will affect the human within the system."
Born of disaster
The discipline was born out of close investigation of historical industrial disasters where human failure was a major factor, such as Three Mile Island, Piper Alpha, Chernobyl, Bhopal and – in particular – air crash investigation, whose methods human factors engineers have borrowed extensively.
As a consequence of this history, human factors engineering is much more widely accepted in safety-critical industries than in others. Says Meeks: "We often work in high-hazard, high-risk industries. So you'll most commonly find human factors people in defence, oil and gas – anywhere you've got real safety implications, really."
According to Meeks, the single biggest problem when it comes to human factors engineering is understanding what human beings are and are not good at. He says: "As humans, we can offer semantic meaning to quite disparate bits of information. So if you've got several systems telling you different things, what humans are really good at is amalgamating that information and making something of it. Often systems use humans in the wrong way – so they might use us for overseeing processes or in an observing role. We're really not good at those things, as we're prone to fatigue and boredom and demotivation. So we try and design systems that capitalise on our natural abilities."
So what form does this take? The first step is to define the human-related requirements of the system. "We sit down and decide what the systems need to do to aid decision making or situational awareness," says Meeks. "How big do the controls need to be? When does the user need to be presented with information in order to make the right decision? What's the environmental condition going to be in that vehicle and how is that likely to affect the user?"
End results
The approach takes the physical, cognitive, organisational and environmental factors as subject headings and, from them, draws up a list of requirements that the design has to adhere to throughout the whole of the design process. This ultimately ensures that you end up with a product that is usable.
The example of good human factors design to which many point is Apple. Says Meeks: "Good design is not an accident. It's all in the approach. HF [human factors] is not rocket science. If you assess the requirements of the human user at the earliest stage, then you're much better able to deliver a good final product. Apple does this well. Its 'design for all' philosophy involves engaging with the end user at the earliest possible stage of the design process."
Other admirable features of Apple's design, he believes, lie in its products being both consistent and customisable. He says: "The format of how you navigate is always the same, which means you can't get lost in the menus. Equally, though, you've got the ability of individual users to customise the product, while retaining the fundamental similarities in the system that allow everyone to use it effectively."
Equally, says Meeks, a product such as the iPad offers a great example of how to keep users engaged. He says: "The 'Homer Simpson' example of a worker bored in a chair in an oversight capacity is the worst possible way of doing things. You don't want passive users. One of the fundamental principles of human factors is to keep users engaged physically and cognitively. That way, if something goes wrong, they're in the best position to have a mental model of what's going on and act appropriately.
"Again, with the iPad, it's all about keeping users engaged. Huge use of graphics, diagrams, colour – all of these things communicate messages without using words, thus crossing language barriers and getting information across more immediately than words ever can.
Of course, this is all very well for Apple and its products, but what relevance does that have in other contexts? Meeks draws the comparison with the design of a nuclear submarine (a project with which he and Frazer Nash are familiar). "The human remains the same," he says. "Apply principles in one area, you can apply them in another. The thing to remember about a complex system like a nuclear submarine is that it is ultimately one system, but consists of thousands of smaller systems. So if you approach the design of the smaller ones in the way I've outlined – engage users at an early stage and determine your human requirements nice and early – you can ensure that all those smaller systems are designed in the same way."
Thus, control rooms on submarines now use software interfaces almost entirely, so the iPad example of consistency and ease of use – allowing users to move from one system to another while still intuitively understanding how each system operates – is highly relevant here.
Equally, the increasing need to reduce the manning levels on submarines has made a level of interoperability between systems necessary. Says Meeks: "Historically, people have had individual specialisms, but there is an increasing drive for lower levels of manning on submarines in particular, so universal interoperability is a long-term goal. That means smaller, more flexible teams and that might mean responsibility for three or four systems. That means the systems have to be automated more.
"Also, this means you can't necessarily have one person staring at a Sonar screen all the time, so you may need to design a system that just gives the user the most pertinent information at any given time, thus freeing them up to do other stuff."
One area where an application such as a nuclear submarine is distinct from other systems, though, is in the level of stress and fatigue to which its users may be subject.
Says Meeks: "You have to consider the context of use very early. Often it's hard to find comparisons from other areas. But there are similarities between putting someone in the control room of a nuclear plant and putting them in the cockpit of an aeroplane - there are a lot of similarities between those high-hazard, safety critical industries."
"The military has that unique issue of doing that stressful job 24 hours per day. So from an HF point of view that might just be a question of setting lower levels of tolerance, for instance – including more levels of mitigation to allow for stress and fatigue. What that might mean is that you have a lower tolerance for human error."
To address this, Meeks refers to the 'Swiss Cheese Model', whereby a system's defences against failure are modelled as a series of barriers, represented as slices of cheese. The holes in the slices represent weaknesses in individual parts of the system, and they continually vary in size and position across the slices. The system fails when holes in every slice momentarily align, permitting a 'trajectory of accident opportunity' along which a hazard passes through all of the barriers.
Says Meeks: "Our job is to put in more cheese. The more cheese you put in, the less chance of getting a pencil through it. So if you think about a software system, that means putting in more layers of mitigation and automated systems that will pick up human error to support the user in a more robust manner."
For all its advantages, however, human factors engineering is still not widely incorporated into design practice. In part, Meeks believes, this is a cultural issue: engineers are not trained and conditioned to think in terms of human use of products. He says: "Often I find there is resistance from engineers, who are often suspicious of the psychological, cognitive and almost philosophical ideas that underpin human factors engineering."
Naturally, Meeks makes it clear that human factors must be considered at the earliest stages of design, saying: "HF is often brought in late as a luxury. The problem with that, of course, is that they may then identify major problems or risks that, at a late stage, will cost you a lot of money to rectify. So the best practice is to get us involved early."
However, it seems clear that there are huge advantages to good human factors engineering. As Meeks puts it: "Essentially humans don't change. Whatever system we are put in, our natural abilities are always the same. Obviously training can alter our levels of knowledge, but in essence it's all about making the best use of the human by giving them the information and opportunities they need to make those all-important contextual and semantic decisions they're good at."