Medical decision support software is the term used for computer programs and associated databases that help make diagnoses and advise on treatment. Because such software deals with skills that are central to the role of doctors, it tends to be the focus of hopes and fears about medical software. In 1970, one of the pioneers of medical informatics, William B. Schwartz, MD, expressed some of these hopes and fears in a New England Journal of Medicine article, in which he looked ahead to the year 2000 and sketched out the hope of computers being used to “assist in diagnosis and management.” He also raised two concerns. One was that being a doctor would become less skilled work due to “the surrender of many memory and analytical functions” to software. The other concern was an Orwellian vision of “a federal panel, and satellite review boards, charged with the responsibility for selecting computer programs, for modifying existing programs in the light of experience, and for ensuring that the programs are constantly kept abreast of current knowledge.” He suggested that “Under this system, a handful of individuals, drawn largely from university centers and knowledgeable in the arcane arts of computing sciences as well as medicine, might thus emerge as a new elite.”
Have such hopes and fears come to pass? The details vary, and one of the important distinctions is between diagnosis and treatment.
In diagnosis, there is a single right answer. One might predict that this would lead to a high degree of centralization of expertise, but that has not been the case. Types of diagnostic decision support include:
Diagnostic algorithms come close to the 1970 vision in their standardization and their ability to be used by less skilled practitioners. Typically, algorithms are used for routine problems, such as sore throats, for which there is enough simplicity, experience, and constancy for the algorithms to perform well even when used by less specialized staff, such as nurse practitioners. However, for most conditions covered by diagnostic algorithms, patients visit physicians who do not use formal algorithms, and so far, there have been no major cost incentives pushing toward monolithic standardization.
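To make concrete what such a routine-problem algorithm looks like, here is a minimal sketch loosely modeled on the well-known Centor criteria for sore throat. The point weights and recommendation thresholds are simplified for illustration, not a clinical tool; the function names are invented for this sketch.

```python
# Illustrative sketch of a sore-throat triage algorithm, loosely modeled on
# the Centor criteria. Weights and thresholds are simplified for
# illustration only -- not for clinical use.

def sore_throat_score(tonsillar_exudate, tender_nodes, fever, cough):
    """Return a simple point score from four yes/no findings."""
    score = 0
    score += 1 if tonsillar_exudate else 0   # tonsillar exudate present
    score += 1 if tender_nodes else 0        # tender anterior cervical nodes
    score += 1 if fever else 0               # history of fever
    score += 1 if not cough else 0           # absence of cough counts
    return score

def recommendation(score):
    """Map the score to a coarse next step, as a protocol card might."""
    if score <= 1:
        return "no test"
    elif score <= 3:
        return "rapid strep test"
    else:
        return "test and consider treatment"

print(recommendation(sore_throat_score(True, True, True, False)))
```

The appeal for routine problems is clear from the sketch: every input is a yes/no finding a nurse practitioner can check, and the output is a fixed next step, which is exactly the kind of standardization the 1970 vision anticipated.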
Search tools tend to have secret algorithms and produce search results without revealing the reasoning behind the selection. Although this has some of the feel of Orwellian centralization, in practice, this isn’t the case, largely because search tools are rough instruments that miss many of the subtleties of diagnosis. For example, such tools are poor at collecting information about the frequency of findings in diseases, and as a result, the diagnostic searches are lacking in sensitivity and specificity. The search tools also poorly handle temporal information, such as onset and acuity, and have difficulty determining whether an article is referring to a finding present in a disease, absent in a disease, or present in a disease mentioned in the differential diagnosis. As a consequence, these tools produce output labeled as “search results,” since there is not enough detailed information to rank diseases into a differential diagnosis. Although the search process is somewhat secretive, the results are not presented as having a monolithic answer, and one doesn’t get the impression of being dictated to by a “new elite.” Also, due to the lack of precision in the results, search tools do not even come close to “de-skilling” diagnosis.
Diagnostic software is more ambitious in that it aims to provide a ranked differential diagnosis. Although this has much potential for centralization of information by a new elite, that has not turned out to be the case, in large part because doctors gravitate toward a more open model. This is the approach that we implemented at SimulConsult, using a “computational wiki” with doctor-initiated submissions to an open database, combined with peer review before changes are incorporated. So, in contrast to the 1970 fear that diagnostic software would create a centralized new elite, in practice, the result is more like Wikipedia, a model about as opposite to Orwell’s vision as one can imagine, since any physician can become part of the group assembling the information. This approach is part of a wider trend of information sharing referred to as “Health 2.0.”

The second 1970 concern, that decision support software would de-skill medicine, is a more realistic one, because diagnostic software can assist not only with identifying diagnoses, but also with identifying the most useful findings to check. However, flexible approaches tend to predominate; typically, many different useful findings are suggested, respecting the importance of the doctor’s judgment about which findings are likely to be reliable and useful. The overall effect of diagnostic software has been more to empower doctors, helping them make diagnoses of diseases beyond those with which they are most familiar.
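The contrast with search tools can be made concrete: software that records the frequency of each finding in each disease can rank a differential with a naive-Bayes style calculation, which is precisely the step search tools cannot take. The diseases, priors, and finding frequencies below are invented placeholders for illustration, not data from SimulConsult or any real database.

```python
# Naive-Bayes style ranking of a differential diagnosis. All numbers are
# invented placeholders; a real system would draw finding-frequency data
# from a curated, peer-reviewed database.

# P(finding | disease) for each disease, plus a rough prior prevalence.
DISEASES = {
    "disease_A": {"prior": 0.01,  "findings": {"fever": 0.9, "rash": 0.7}},
    "disease_B": {"prior": 0.05,  "findings": {"fever": 0.6, "rash": 0.1}},
    "disease_C": {"prior": 0.001, "findings": {"fever": 0.2, "rash": 0.9}},
}

def rank_differential(present_findings):
    """Score each disease by prior * product of finding likelihoods,
    then normalize so the scores sum to 1 (posterior-style weights)."""
    scores = {}
    for name, d in DISEASES.items():
        p = d["prior"]
        for f in present_findings:
            p *= d["findings"].get(f, 0.01)  # small default for unlisted findings
        scores[name] = p
    total = sum(scores.values())
    return sorted(((name, s / total) for name, s in scores.items()),
                  key=lambda pair: pair[1], reverse=True)

for disease, weight in rank_differential({"fever", "rash"}):
    print(f"{disease}: {weight:.2f}")
```

A search tool returns an unranked list of articles; a calculation like this one returns an ordered differential, and the same frequency data also lets the software suggest which unchecked finding would most change the ranking.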
Treatment differs from diagnosis in that there is not just one right answer. Types of treatment software include:
Choosing one particular protocol could seem over-centralized, because treatment advice has no unique answer and the protocol is a somewhat arbitrary set of procedures from an authority figure. However, in practice, use of treatment software hasn’t had that effect, in part because there is a multiplicity of sources of treatment information. The only element of compulsion comes from cost-based measures implemented by insurers, but in countries with a variety of insurance options, diversity of protocols results from the diversity of payers. Even a recent proposal by Senator John Kerry and former House Speaker Newt Gingrich for a new federal “institute for evidence-based medicine” suggests a diversity of inputs from doctors and the private sector, with at least some mixture of strategies rather than an implementation of the “new elite” vision of 1970.
Standardization of treatment has been applied primarily to the most straightforward treatment problems, as in many MinuteClinic treatments. In such situations, the standardization and the implementation by nurse practitioners seem quite appropriate.
Adoption of decision support software
The main problems holding back development of decision support tools are determining the best means of creating content, and developing strategies for “monetization” (effective revenue models). The basic problem with monetization has been that incentives often encourage doctors to add billable resources, such as lab tests and consults, rather than use additional knowledge to perform at a higher level. Nevertheless, a variety of incentives are being used to advance decision support software, including increased reimbursement for doctors using software that allows them to perform at higher levels, evidence-based criteria for approving diagnostic tests, education requirements, and risk management efforts. New “concierge-like” practice models may accelerate such trends.
In reconsidering the 1970 vision of medical decision support software, we have not yet moved toward the Orwellian version of medical information controlled by a new elite; indeed, we have moved more toward open Health 2.0 approaches that empower both physicians and patients. Doctors haven’t been reduced to de-skilled workers; the complexity of being a doctor has increased over the decades, due in large part to the increasingly sophisticated understanding of the biological basis of disease. However, we haven’t yet achieved the hope of the 1970 vision either—the widespread use of decision support software. That vision depends on four things: the right hardware, the right software, the right data, and the right ways of building into medicine proper incentives for quality. Anything that depends on getting all of those right is bound to take longer than initially expected.
Dr. Segal is an MDNG Healthcare Advisory Board member and the founder of SimulConsult, which makes decision-support software to assist with medical diagnosis.