The irony of automation: Why do clinicians let computers make mistakes?

This bias grows over time as computers prove their value and accuracy (in other words, their reliability), as they usually do. Today's computers, with their human-like traits of voice and their ability to answer questions or anticipate our needs (think of how Google completes your thought as you type a search query), generate even more trust, sometimes more than they deserve.

Increasingly, engineers and psychologists are focusing on the human factor: building machines that are transparent about how reliable their results are. In its 2011 victory over the reigning Jeopardy! champions, IBM's Watson computer signaled its degree of certainty along with its answers. Before his death last month, George Mason University psychologist Raja Parasuraman was working on a computerized Trust-o-Meter, in which a machine might display a blue, yellow, or red light depending on its reliability.

But that may not have saved Levitt, since the barcode machine probably felt quite certain as it pushed her to deliver the right dose: 38½ pills. So we are left grappling with how to train people to trust when they should, but to heed Reagan's advice to "trust but verify" when circumstances dictate. The FAA is now prompting airlines to build scenarios into their simulator training that foster the development of "appropriately calibrated trust." Medicine clearly needs to address its version of the same problem.

In Levitt's case, her decision to put her faith in the barcode system did not reflect blind trust: since it was installed a year earlier, the system had saved her, as it had saved all the nurses at UCSF, many times. Unlike the prescription alerts shown to doctors and pharmacists and the ICU cardiac monitors, which have high rates of false positives, nurses generally find their barcode warnings accurate and clinically significant. In fact, under the old paper-based process, the medication administration step was often the most frightening part of the medication ecosystem, because once the nurse believes she has the right drugs in hand, no barriers remain between her and an error, sometimes a fatal one.

Months after the error, I asked Levitt what she thought of Epic's barcode system. "I think it's much more effective and safe," she said. "If you scan the wrong medication, it immediately gives you a warning saying, 'This is the wrong drug; there are no active orders for this drug.' So I'll know, oops, I scanned the wrong one. It has saved me."

Levitt trusted not only the barcode system but UCSF's entire medication safety system. Such trust can itself become another hole in the Swiss cheese. While a safety system may look robust from the outside, with its multiple independent checks, many errors gain a kind of perverse momentum as they breach successive layers of protection. That is, toward the end of a complicated process, people assume that an order that has made it this far, however confusing, must have been vetted by the people and systems upstream. "I know that a doctor writes the prescription," Levitt said. "The pharmacist always checks it ... then it comes to me. So I thought of it as a triple-check system, where I'm the last check. I trusted the other two checks."

Levitt brought the medication-filled rings to Pablo's bedside. She scanned the first packet (each contained a single pill), and the barcode machine indicated that this was only a fraction of the correct dose: the scanner had been programmed to look for 38½ pills, not one. So she scanned each packet, pill by pill, like a supermarket checkout clerk ringing up more than three dozen identical grocery items.
