Dr Alper is a practicing internist in Burlingame, Calif, and a Robert Wesson Fellow in Scientific Philosophy and Public Policy, Hoover Institution, Stanford University, Palo Alto, Calif.
It’s fashionable to denigrate the primitive state of healthcare computerization. According to some commentators, making the right diagnosis and prescribing the right drugs ought to be automatic in this age of high technology. Too often we make decisions with flawed or missing information and with deficient medical knowledge, they claim.
There’s no doubt that our critics have a point. The power of information technology is awesome. So much so that, at a minimum, it will make steady inroads into paper-based medical practice—whether we speak of patient records or the information in medical textbooks. But what really excites futurists are decision support systems (DSSs).
What exactly are they? According to one source—TechWeb’s TechEncyclopedia (www.answers.com)—a DSS is “an information and planning system that provides the ability to interrogate computers on an ad hoc basis, analyze information and predict the impact of decisions before they are made.” The explanation continues: “Database management systems let you select data and derive information for reporting and analysis. Spreadsheets and modeling programs provide both analysis and ‘what if?’ planning. However, any single application that supports decision making is not a DSS. A DSS is a cohesive and integrated set of programs that share data and information.”
Imagine how that would apply to medicine. The computer might analyze a patient’s symptoms, physical and lab findings, and special tests, and come up with a diagnosis. Using all the rest of the patient’s data and a search of the medical literature, it would prescribe a treatment and calculate the odds of success. Of course, all this would take place in real time, and it would not raise costs, even as care improves.
Not likely anytime soon. DSSs in medicine are a lot narrower in scope and have a spotty record. I know this first-hand, because I have been involved with 3 of them over the course of nearly a decade. The first was the Quick Medical Reference (QMR)—the evolutionary heir to Internist I, which in the 1970s was the first program to apply artificial intelligence to making medical diagnoses. “Findings”—meaning patients’ symptoms, physical exam characteristics, and diagnostic test results—could be entered into a computer program and subjected to various algorithms that would calculate the likelihood of various diagnoses whose profiles had been entered into the knowledge base.
The QMR was developed at the University of Pittsburgh, initially by essentially downloading the medical brain of Jack D. Myers, MACP, a pioneer in medical informatics who was once dubbed “the smartest internist in America.” Thereafter, generations of medical students and house staff would do literature searches to create disease profiles. These profiles were reviewed in conferences with faculty and adopted by consensus.
How to score the significance of the findings was the biggest initial problem. That was resolved by adopting a “heuristic” approach—attempting to mimic reality by creating probability ranges (such as rarely, somewhat likely, about 50-50, very likely, almost always). The heurists argue that, for example, simply averaging the likelihood of an enlarged spleen across several reported series of infectious mononucleosis cases, rather than preserving the range of frequencies reported by different observers, creates an unjustified sense of certainty—one that can lead the clinician seriously astray when a chain of averages is used in serial calculations.
That is true both for frequency data and for “evoking strengths,” the other diagnostic parameter that scores the “specificity” of particular findings based on a combination of how many diseases share the finding and how prevalent those diseases are.
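The heurists’ objection can be made concrete with a toy calculation. The sketch below (all numbers invented for illustration, not drawn from QMR’s actual knowledge base) chains three findings together, once with single averaged probabilities and once with the reported ranges preserved:

```python
# Toy illustration of the heurists' objection: chaining point averages
# yields one falsely precise number, while interval (range) arithmetic
# preserves how uncertain the conclusion really is.
# All figures below are hypothetical, not QMR's actual values.

def chain_points(probs):
    """Multiply point probabilities for a chain of independent findings."""
    result = 1.0
    for p in probs:
        result *= p
    return result

def chain_ranges(ranges):
    """Multiply (low, high) probability ranges for the same chain."""
    low, high = 1.0, 1.0
    for lo, hi in ranges:
        low *= lo
        high *= hi
    return low, high

# Frequency of each finding as reported across several series, e.g. the
# fraction of mononucleosis cases with an enlarged spleen.
ranges = [(0.40, 0.75), (0.50, 0.90), (0.30, 0.65)]
points = [(lo + hi) / 2 for lo, hi in ranges]  # the "averaging" approach

print(chain_points(points))   # one tidy number, spurious precision
print(chain_ranges(ranges))   # the honest spread after three findings
```

With these made-up figures the averaged chain collapses to about 0.19, while the range-preserving chain spans roughly 0.06 to 0.44: exactly the spread of possibilities that serial averaging hides from the clinician.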
The heurists have battled the “probabilists” (those who believe in extracting absolute probabilities from uncertain data), whose approach is far more widespread and is the one used in other diagnostic DSSs, such as Iliad and DXplain. It turned out that the QMR was the system most likely to lead to the correct diagnosis, provided that the diagnosis was included in the program. And there the QMR fell short. Its competitors added more diseases.
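Why a missing disease profile is fatal becomes obvious once the ranking step is sketched out. The following is a minimal, hypothetical illustration in the spirit of best-fit scoring, not QMR’s actual algorithm; the profiles and weights are invented:

```python
# Hedged sketch of "best fit" diagnosis ranking. Each disease profile maps
# findings to invented weights; a candidate's score is the sum of weights
# for the findings the patient actually has. Profiles are hypothetical.

PROFILES = {
    "mononucleosis": {"fever": 3, "sore throat": 4, "enlarged spleen": 5},
    "strep pharyngitis": {"fever": 3, "sore throat": 5},
    "influenza": {"fever": 4, "myalgia": 4, "cough": 3},
}

def rank(findings):
    """Return profiled diseases sorted by summed weight of matched findings."""
    scores = {
        disease: sum(w for f, w in profile.items() if f in findings)
        for disease, profile in PROFILES.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank({"fever", "sore throat", "enlarged spleen"}))
# A disease absent from PROFILES can never appear in the ranking, no matter
# how well the patient's findings would have matched it.
```

The ranking can only ever surface diseases whose profiles were researched and entered, which is why adding more diseases was such a decisive competitive advantage.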
Several hundred physicians subscribed to the QMR in its heyday 15 to 20 years ago. Sensing a commercial opportunity, Joe Hirschmann, then president of First Databank, a forward-thinking pharmaceutical database company, bought the QMR and dreamed of using it in combination with electronic medical records (EMRs) to automatically generate diagnoses. That turned out to be wishful thinking.
The EMR never quite caught on as expected. The 2 QMR physician-informaticians who came from Pittsburgh became less and less clinical over time. I was brought in as an active clinician to rectify that and to supervise the knowledge-base researchers. But by then sales had fallen, prospects were uncertain, and the resulting ambivalence led to an unwillingness to devote sufficient resources to keep the program state-of-the-art. The last update was issued in 2001.
It is sad. The QMR was really neat. Decades of effort went into it. And calls still come from around the world inquiring about it and requesting permission to use it. The reason for telling the QMR story at such length is that it sheds light on which DSSs we in practice get to see, how they come and go, and why.
In the first place, the QMR was conceived at a time when Marcus Welby, MD, the paragon physician who treated one patient a week on TV, was actually a believable character. Internists generally were portrayed as genial pipe smokers, sitting before a crackling fire in the late evening doing what they loved best—thinking about their patients. For such doctors, what could be a better Christmas gift than a computer and a QMR?
The truth was that it was time-consuming to enter patient data and to wait for computer processing. It took enormous motivation to use the QMR, because up to 6 months were needed to become skillful with sophisticated operations like critiquing diagnoses and comparing diseases that are diagnostic competitors. Nor did the program produce a diagnosis in most complex cases. It listed potential diagnoses ranked according to best fit, with the hope that the correct diagnosis would be among the top dozen or so.
All this was immensely fascinating to the academics who created the program. They had day jobs and never had to meet a payroll. Medical students and house staff provided free labor to create the knowledge base. Quality control was ad hoc, and writing papers about the experience was a high priority.
It was a different story in the commercial domain, where return on investment is never far from management’s mind. The research to add new disease profiles was costly, and keeping the existing database up-to-date without a large committee of consultants was an uphill battle. Nor did the electronic functionality fare much better. Some program bugs did not get fixed. Customer support lagged, and the product that had once intrigued me ended up embarrassing me.
Still, a few computer-savvy doctors who were early adopters and true believers succeeded in making the QMR work in their practices. One developed patient questionnaires that were filled out in the waiting room and could later be analyzed if necessary. Another wrote a whole software package that incorporated the QMR into his own EMR. For those doctors, the demise of the QMR must have been especially difficult.
I think it’s ironic that it took a lot of brain power to correctly use a diagnostic tool that was intended to substitute for, and even improve upon, the ability of the human brain. The QMR, like every other attempt to date, has failed to do so. But that may be because such systems have been overly ambitious and so have all been trying to do the right thing in the wrong way.
More on this next month.