In keeping with the informal “physician education” theme of the May issue of MDNG, I thought it would be useful to do a quick survey of some of the current research to determine whether online CME programs are an effective means of educating physicians, and if so, what qualities or features define an effective online learning experience.
A number of studies have evaluated the effectiveness of particular online CME approaches and designs. Several others have compared approaches to determine relative efficacy, in an attempt to develop a better idea of what constitutes “best practices” in CME program design and delivery.
Writing in the October 2007 issue of Medical Teacher, Greg Ryan, PhD, and colleagues claim that “Online CME can be as effective as face-to-face teaching in the development of knowledge, skills, and attitudes for professional practice,” especially to the extent the design of the online program follows the tenets of Situated Learning Theory (SLT), which the authors describe as emphasizing “authentic” learning within a “community of practice,” facilitated by “an expert practitioner and supported by interaction and collaborative knowledge construction.”
Another necessary component of an effective online CME course identified by the authors is “cognitive apprenticeship,” a quality that emphasizes “the importance of activity in knowledge construction” and highlights the “situatedness of learning in a particular context.” An online CME program reinforces cognitive apprenticeship if it incorporates the characteristics the authors describe.
Online CME programs that provide learners with “flexible opportunities to engage in authentic, interactive, and self-directed learning activities” have been shown, Ryan and colleagues said, not only to “facilitate participant engagement and collaboration,” but also to “improve learners' acquisition of knowledge and skill and facilitate transfer beyond the initial context of learning.”
They cite several other studies to support this, including Curran and Fleet’s “A Review of Evaluation Outcomes of Web-based Continuing Medical Education,” published in the June 2005 issue of Medical Education; Mazmanian and Davis’ “Continuing Medical Education and the Physician as a Learner: Guide to the Evidence,” published in the September 4, 2002 issue of JAMA; and Harris and colleagues’ “Can Internet-based Education Improve Physician Confidence in Dealing With Domestic Violence?” published in the April 2002 issue of Family Medicine.
This and other studies indicate that programs that are more immersive, action-oriented, and practical, and that facilitate interaction and problem-solving, are more effective. This is consistent with physician testimonials and other anecdotal evidence we’ve seen over the years, and it makes sense that online CME programs following the format outlined by researchers are a useful means of fulfilling physicians’ continuing education needs.
However, several problems and shortcomings identified in some of the studies make it difficult to accurately measure the true effectiveness of this educational approach. One of the most commonly cited problems with existing CME research is that much of it relies on participant satisfaction data rather than more objective measures of knowledge retention and application to everyday practice. Curran and Fleet note this challenge, reporting that they found “limited research demonstrating performance change in clinical practices,” and were able to find “no studies reported in the literature that demonstrated that Web-based CME was effective in influencing patient or health outcomes.” Ryan and colleagues, writing two years after the Curran study, echo this sentiment, reporting that “studies that have evaluated the efficacy of web-based CME lack methodological rigor, predominantly relying on participant satisfaction data as an outcome measure or using a single-arm pretest post-test design with no control group,” which makes it “difficult to judge whether demonstrated improvement in clinician knowledge and/or skills is a result of mode of delivery or exposure to educational content.”
Relying on participants’ responses regarding their level of satisfaction with a just-completed online program raises several problems in terms of selection bias, subjectivity, and other factors that can skew perception of the merit of a particular program. How many physicians reading this have had the experience of selecting an online CME program in part based on the high marks awarded by previous participants, only to find the program to be dull and inadequate?
Do some readers actually prefer non-interactive text-based approaches that mimic the experience of old-fashioned journal-based CME articles? Are video lectures and/or slide presentations with corresponding audio tracks any better at conveying useful information online? Or do they too suffer from a lack of interactivity?
We’d like to know: Are physicians merely making do with the current crop of online CME, accepting sub-par learning experiences as the price to pay for the convenience and affordability of online CME? Also, are there specific online programs readers have found to be particularly effective? If so, why? E-mail firstname.lastname@example.org to share your thoughts.