Texas Physician Ebook Continuing Education

Perceptual errors account for a majority of misdiagnoses in radiology and can be rooted in faulty visual processing or, to a lesser extent, cognitive biases. Improving visual perception skills, which predominate in the diagnostic process in radiology, requires training methods different from those used to improve clinical reasoning. Four studies evaluated the impact of educational interventions on perceptive skills, with three showing improvement in perceptive performance. The studies involved subjects early in their medical training, and each tested a different intervention to improve perceptive performance. A novel study by Goodman and Kelleher (2017) took 15 first-year radiology residents to an art gallery, where experts with experience in teaching fine art perception trained the residents to thoroughly analyze a painting.18 The trainees were instructed to write down everything they could see in the painting, after which the art instructor showed them how to identify additional items in the painting that they had not perceived. To test this intervention, the residents were given 15 radiographs pre-intervention and another 15 post-intervention and asked to identify the abnormalities. At baseline, the residents scored an average of 2.3 out of a maximum score of 15. After the art training, the residents' scores significantly improved, to an average of 6.3 (p<.0001), indicating that perception training may improve radiology residents' abilities to identify abnormalities in radiographs.

Another study evaluated different proportions of normal and abnormal radiographs in image training sets to determine the best case mix for achieving higher perceptive performance.19 For the intervention, the authors used three different 50-case training sets, which varied in their proportions of abnormal cases (30%, 50%, 70%).
One hundred emergency medicine residents, pediatric residents, and pediatric emergency medicine fellows were randomized to use one of the training sets. After the intervention, all participants completed the same post-test. All three groups showed improvement after the intervention, but with varying sensitivity-specificity trade-offs. The group that received the lowest proportion (30%) of abnormal radiographs had higher specificity and was more accurate with negative radiographs. The group that trained on the set with the highest proportion of abnormal radiographs (70%) detected more abnormalities when abnormalities were present, achieving higher sensitivity. These findings have significant implications for medical education, as case mix may need to be adjusted based on the desired sensitivity or specificity for a given examination type (e.g., screening exams vs. diagnostic tests).

The use of cognitive training interventions, such as reflective practice, may yield the greatest improvements for only the most complex diagnostic cases. This makes application of appropriate strategies in actual clinical settings difficult, as whether a case is complex is often not determined until after the diagnostic process has begun. In addition, some of these teaching techniques, such as those using standardized patients or requiring development of simulations, are labor intensive and may not be generalizable.

Peer review

Peer review is the systematic and critical evaluation of performance by colleagues with similar competencies using structured procedures. Peer review in clinical settings has two recognized objectives: data collection and analysis to identify errors, and feedback intended to improve clinical performance and practice quality. It also serves to fulfill accreditation requirements, such as The Joint Commission requirement that all physicians who have been granted privileges at an organization undergo performance evaluation and data collection, or the American College of Radiology physician peer review requirements for accreditation. When done systematically and fairly, peer review both contributes to and derives from a culture of safety and learning. Peer review, when designed appropriately, has the potential to achieve patient safety goals by affecting care either directly at the time of testing (e.g., identifying and resolving the error before it affects the patient) or indirectly by improving physician practice through continual learning and feedback.

Traditional peer review: random versus nonrandom selection

Evaluation of professional practice, which can be accomplished through peer review, is a requirement for accreditation by organizations such as the American College of Radiology (ACR) and The Joint Commission, and is recommended by professional associations such as the College of American Pathologists. The best-known example, used in radiology, is the ACR's RADPEER program, a standardized process with a set number of cases targeted for review (typically 5%) and a uniform scoring system.
The cases, previously interpreted images that are used for comparison during a subsequent imaging exam by the reviewing "peer" radiologist, are randomly selected and scored. Scores are assigned based on the clinical significance of the discrepancy between the initial radiologist's interpretation and the reviewing radiologist's interpretation: (1) concur with interpretation; (2) discrepancy in interpretation, where the correct interpretation is not ordinarily expected to be made (i.e., an understandable miss); and (3) discrepancy in interpretation, where the correct interpretation should be made most of the time. Scores of 2 and 3 can be modified with an additional designation of (a) unlikely to be clinically significant or (b) likely to be clinically significant. Scores of 2b, 3a, or 3b are reviewed by a third party, typically a department chair, medical director, or quality assurance committee. Discrepancy rates can then be calculated for individual radiologists and used for comparison against peer groups or national benchmarks, and for improving practice.
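To make the scoring and discrepancy-rate arithmetic concrete, the sketch below tallies a set of hypothetical RADPEER scores for one radiologist. This is an illustrative example only (the scores are invented, and this is not ACR software); it simply applies the score definitions described above.

```python
# Illustrative sketch (not ACR software): tallying hypothetical RADPEER
# review scores for one radiologist and computing a discrepancy rate.
from collections import Counter

# Hypothetical scores for 40 randomly selected cases.
# "1" = concur; "2a"/"2b" = understandable miss (unlikely/likely
# clinically significant); "3a"/"3b" = correct interpretation should
# have been made (unlikely/likely clinically significant).
scores = ["1"] * 36 + ["2a", "2a", "2b", "3a"]

counts = Counter(scores)

# Discrepancy rate: proportion of reviewed cases scored 2 or 3.
discrepant = sum(n for s, n in counts.items() if s != "1")
rate = discrepant / len(scores)
print(f"Discrepancy rate: {rate:.1%}")  # 4/40 = 10.0%

# Per the program rules above, scores of 2b, 3a, or 3b are flagged
# for third-party review (department chair, QA committee, etc.).
flagged = [s for s in scores if s in ("2b", "3a", "3b")]
print(f"Cases flagged for third-party review: {len(flagged)}")
```

A rate computed this way would then be compared against peer-group or national benchmarks, as the text describes.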

Discrepancy rates are typically relatively low (range 0.8%-3.8%) in a review of six studies of randomly selected images.

Double reading

A common form of nonrandom peer review, particularly in radiology practice, is the use of double reading, in which a second clinician reviews a recently completed case. With this method the review is integrated into the diagnostic process rather than conducted retrospectively, allowing errors to be identified and resolved before a report is transmitted to the ordering provider or the patient. Geijer and Geijer (2018) reviewed 46 studies to identify the value of double reading in radiology.20 The studies fell into two categories: those that used two radiologists with a similar degree of sub-specialization (e.g., both neuroradiologists) and those that used a subspecialized radiologist only for the second review (e.g., general radiologist followed by hepatobiliary radiologist). Across both types of studies included in the review, double reading increased sensitivity at the expense of reduced specificity. In other words, double reading tended to identify more disease, while also identifying disease in cases that were actually negative (i.e., false positives). With discrepancy rates in studies between 26% and 37%, the authors suggest that double reading might be most impactful for trauma CT scans, for which a large number of images must be read quickly under stressful circumstances. The authors also suggest that it may be more efficient to use a single subspecialized radiologist rather than implement double reading, as using a subspecialist as a second reviewer introduced discrepancy rates of up to 50%. This was thought to result from the subspecialist changing the initial reports and from the bias introduced by having the subspecialist serve as the reference standard for the study. In the case of double reading, Natarajan et al.
(2017) found that adding the radiologist's interpretation to the orthopedic interpretation of musculoskeletal films in pediatric orthopedic practice added clinically relevant information in 1% of the cases, yet misinterpreted 1.7% of the cases, potentially introducing diagnostic errors into the process.21 Murphy et al. (2010) found that double reading of colon CT scans increased the number of individuals falsely diagnosed with colon pathology.22 The protocol found one extra-colonic cancer, but at the expense of five unnecessary endoscopic procedures. On the other hand, Harvey et al. (2016) identified that their group-oriented consensus review method had a secondary effect of fostering a culture of safety in their department, where radiologists felt comfortable identifying and openly discussing diagnostic errors.23 This finding was supported by Itri et al. (2018), who recognized that peer learning conferences, during which diagnostic errors were reviewed, supported a culture of safety in which clinicians learned from their mistakes.24
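The sensitivity-specificity trade-off that recurs in these studies, both in the case-mix training experiment and in double reading, can be made concrete with a small worked example. The counts below are hypothetical, chosen only to illustrate the definitions; they are not drawn from the studies cited.

```python
# Hypothetical illustration of the sensitivity/specificity trade-off
# discussed above; these counts are invented, not from the cited studies.

def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Single reader: misses some disease but raises few false alarms.
single = sensitivity_specificity(tp=80, fn=20, tn=95, fp=5)

# Double reading: catches more disease (higher sensitivity) but
# flags more truly negative cases as abnormal (lower specificity).
double = sensitivity_specificity(tp=92, fn=8, tn=88, fp=12)

print(f"single reader: sens={single[0]:.2f}, spec={single[1]:.2f}")
print(f"double read:   sens={double[0]:.2f}, spec={double[1]:.2f}")
```

In this invented scenario double reading raises sensitivity from 0.80 to 0.92 while specificity falls from 0.95 to 0.88, which mirrors the directional pattern Geijer and Geijer reported and explains why the desirable case mix or review protocol depends on whether the examination is a screening or diagnostic test.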

