Tag: radiology

Is AI a Help or Hindrance to Radiologists? It’s Down to the Doctor

New research shows AI isn’t always a help for radiologists

Photo by Anna Shvets

One of the most touted promises of medical artificial intelligence tools is their ability to augment human clinicians’ performance by helping them interpret images such as X-rays and CT scans with greater precision to make more accurate diagnoses.

But the benefits of using AI tools on image interpretation appear to vary from clinician to clinician, according to new research led by investigators at Harvard Medical School, working with colleagues at MIT and Stanford.

The study findings suggest that individual clinician differences shape the interaction between human and machine in critical ways that researchers do not yet fully understand. The analysis, published in Nature Medicine, is based on data from an earlier working paper by the same research group released by the National Bureau of Economic Research.

In some instances, the research showed, use of AI can interfere with a radiologist’s performance and reduce the accuracy of their interpretation.

“We find that different radiologists, indeed, react differently to AI assistance – some are helped while others are hurt by it,” said co-senior author Pranav Rajpurkar, assistant professor of biomedical informatics in the Blavatnik Institute at HMS.

“What this means is that we should not look at radiologists as a uniform population and consider just the ‘average’ effect of AI on their performance,” he said. “To maximize benefits and minimize harm, we need to personalize assistive AI systems.”

The findings underscore the importance of carefully calibrated implementation of AI into clinical practice, but they should in no way discourage the adoption of AI in radiologists’ offices and clinics, the researchers said.

Instead, the results should signal the need to better understand how humans and AI interact and to design carefully calibrated approaches that boost human performance rather than hurt it.

“Clinicians have different levels of expertise, experience, and decision-making styles, so ensuring that AI reflects this diversity is critical for targeted implementation,” said Feiyang “Kathy” Yu, who conducted the work while at the Rajpurkar lab and is co-first author on the paper with Alex Moehring of the MIT Sloan School of Management.

“Individual factors and variation would be key in ensuring that AI advances rather than interferes with performance and, ultimately, with diagnosis,” Yu said.

AI tools affected different radiologists differently

While previous research has shown that AI assistants can, indeed, boost radiologists’ diagnostic performance, these studies have looked at radiologists as a whole without accounting for variability from radiologist to radiologist.

In contrast, the new study looks at how individual clinician factors – area of specialty, years of practice, prior use of AI tools – come into play in human-AI collaboration.

The researchers examined how AI tools affected the performance of 140 radiologists on 15 X-ray diagnostic tasks – how reliably the radiologists were able to spot telltale features on an image and make an accurate diagnosis. The analysis involved 324 patient cases with 15 pathologies: abnormal conditions captured on X-rays of the chest.

To determine how AI affected doctors’ ability to spot and correctly identify problems, the researchers used advanced computational methods that captured the magnitude of change in performance when using AI and when not using it.
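As a rough illustration of the kind of per-radiologist comparison involved – a minimal sketch, not the study’s actual method, using a hypothetical data layout and column names – one could compute each reader’s accuracy with and without AI assistance and examine the spread of the difference:

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical reads: one row per (radiologist, case) read, 1 = correct call.
reads = pd.DataFrame({
    "radiologist_id": rng.integers(0, 140, size=5000),
    "with_ai": rng.integers(0, 2, size=5000).astype(bool),
    "correct": rng.integers(0, 2, size=5000),
})

# Mean accuracy per radiologist, split by whether AI assistance was available.
acc = (reads.groupby(["radiologist_id", "with_ai"])["correct"]
            .mean()
            .unstack("with_ai")
            .rename(columns={False: "unaided", True: "ai_assisted"}))

# Per-radiologist effect of AI: positive = helped, negative = hurt.
acc["ai_effect"] = acc["ai_assisted"] - acc["unaided"]

print(acc["ai_effect"].describe())      # spread of individual effects
print((acc["ai_effect"] < 0).mean())    # fraction of radiologists hurt by AI

The point of framing it this way is that the quantity of interest is the distribution of individual effects, not just the average change across all radiologists.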

The effect of AI assistance was inconsistent and varied across radiologists, with some radiologists’ performance improving with AI and others’ worsening.

AI tools influenced human performance unpredictably

AI’s effects on human radiologists’ performance varied in often surprising ways.

For instance, contrary to what the researchers expected, factors such as how many years of experience a radiologist had, whether they specialised in thoracic (chest) radiology, and whether they had used AI readers before did not reliably predict how an AI tool would affect a doctor’s performance.

Another finding that challenged the prevailing wisdom: clinicians who had low performance at baseline did not benefit consistently from AI assistance – some benefited more, some less, and some not at all. Overall, radiologists with lower baseline performance still performed relatively poorly with or without AI, and radiologists who performed better at baseline performed consistently well with or without it.

Then came a not-so-surprising finding: More accurate AI tools boosted radiologists’ performance, while poorly performing AI tools diminished the diagnostic accuracy of human clinicians.

While the analysis was not done in a way that allowed researchers to determine why this happened, the finding points to the importance of testing and validating AI tool performance before clinical deployment, the researchers said. Such pre-testing could ensure that inferior AI doesn’t interfere with human clinicians’ performance and, therefore, patient care.

What do these findings mean for the future of AI in the clinic?

The researchers cautioned that their findings do not explain why and how AI tools affect different clinicians’ performance differently, but they note that understanding these mechanisms will be critical to ensuring that AI radiology tools augment human performance rather than hurt it.

To that end, the team noted, AI developers should work with physicians who use their tools to understand and define the precise factors that come into play in the human-AI interaction.

And, the researchers added, the radiologist-AI interaction should be tested in experimental settings that mimic real-world scenarios and reflect the actual patient population for which the tools are designed.

Apart from improving the accuracy of the AI tools, it’s also important to train radiologists to detect inaccurate AI predictions and to question an AI tool’s diagnostic call, the research team said. To achieve that, AI developers should ensure that they design AI models that can “explain” their decisions.

“Our research reveals the nuanced and complex nature of machine-human interaction,” said study co-senior author Nikhil Agarwal, professor of economics at MIT. “It highlights the need to understand the multitude of factors involved in this interplay and how they influence the ultimate diagnosis and care of patients.”

Source: Harvard Medical School

Joint Statement Says Prior Radiation Should Not Affect Decisions to Image

Photo by National Cancer Institute on Unsplash

Previous radiation exposure should not be considered when assessing the clinical benefit of radiological exams, according to a statement by three scientific groups representing medical physicists, radiologists, and health physicists.

Medical radiation exposure is a hot topic. People receive average annual background radiation levels of around 3 mSv; exposure from a chest X-ray is about 0.1 mSv, and exposure from a whole-body CT scan is about 10 mSv. The annual radiation limit for nuclear workers is 20 mSv.
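For a sense of scale, the quoted figures can be converted into equivalent days of natural background exposure – a back-of-the-envelope illustration using only the numbers above:

# Back-of-the-envelope comparison using the doses quoted above.
ANNUAL_BACKGROUND_MSV = 3.0
DAILY_BACKGROUND_MSV = ANNUAL_BACKGROUND_MSV / 365

exam_doses_msv = {"chest X-ray": 0.1, "whole-body CT": 10.0}

for exam, dose in exam_doses_msv.items():
    days = dose / DAILY_BACKGROUND_MSV
    print(f"{exam}: {dose} mSv, roughly {days:.0f} days of natural background")

# chest X-ray: roughly 12 days; whole-body CT: roughly 1200 days (about 3.3 years)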

The American Association of Physicists in Medicine, along with the American College of Radiology and the Health Physics Society, issued a joint statement opposing cumulative radiation dose limits for patient imaging, saying that there could be negative impacts on patient care. The statement opposes the position taken by several organisations and recently published papers.

“It is the position of the American Association of Physicists in Medicine (AAPM), the American College of Radiology (ACR), and the Health Physics Society (HPS) that the decision to perform a medical imaging exam should be based on clinical grounds, including the information available from prior imaging results, and not on the dose from prior imaging-related radiation exposures,” the statement reads.

“AAPM has long advised, as recommended by the International Commission on Radiological Protection (ICRP), that justification of potential patient benefit and subsequent optimization of medical imaging exposures are the most appropriate actions to take to protect patients from unnecessary medical exposures. This is consistent with the foundational principles of radiation protection in medicine, namely that patient radiation dose limits are inappropriate for medical imaging exposures.

“Therefore, the AAPM recommends against using dose values, including effective dose, from a patient’s prior imaging exams for the purposes of medical decision-making. Using quantities such as cumulative effective dose may, unintentionally or by institutional or regulatory policy, negatively impact medical decisions and patient care.

“This position statement applies to the use of metrics to longitudinally track a patient’s dose from medical radiation exposures and infer potential stochastic risk from them. It does not apply to the use of organ-specific doses for purposes of evaluating the onset of deterministic effects (e.g., absorbed dose to the eye lens or skin) or performing epidemiological research.”

The Radiological Society of North America also endorses the position.

The AAPM emphasises the importance of patient safety in its position: radiation usage must be both justified and optimised, and the benefits should outweigh the risks.

“This statement is an important reminder that patients may receive substantial clinical benefit from imaging exams,” said James Dobbins, AAPM President. “While we want to see prudent use of radiation in medical imaging, and many of our scientific members are working on means of reducing overall patient radiation dose, we believe it is an important matter of patient safety and clinical care that decisions on the use of imaging exams be made solely on the presenting clinical need and not on prior radiation dose.

“AAPM is pleased to partner with our fellow societies—the American College of Radiology and the Health Physics Society—to bring a broadly shared perspective on the important issue of whether previous patient radiation exposure should play a role in future medical decision making.”

The AAPM cites the International Commission on Radiological Protection, which stresses that setting radiation exposure limits to patients is not appropriate. This is partly due to a lack of standardised dose estimates.

The position only addresses stochastic risks from radiation exposure – chance effects, such as cancer, whose risk from a given imaging exam is unrelated to the amount of prior radiation. Deterministic effects – direct, dose-dependent responses such as skin damage – result from different biological mechanisms and are not included.

The AAPM compiled a list of answers to frequently asked questions on the topic of medical radiation safety along with references to research papers which support the organisation’s position.

Source: News-Medical.Net

Optimised Scheduling Algorithm Cuts Delays for MRI Scans

A team of researchers from Dartmouth Engineering and Philips has developed an optimised scheduling algorithm that significantly reduces patients’ waiting times for MRI scans at Lahey Hospital in Massachusetts, cutting overall associated costs by 23%.

“Excellence in service and positive patient experiences are a primary focus for the hospital. We continuously monitor various aspects of patient experiences and one key indicator is patient wait times,” said Christoph Wald, professor and chair, Department of Radiology, Lahey Hospital, Tufts University Medical School. “With a goal of improving patient wait times, we worked with data science researchers at Philips and Dartmouth to help identify levers for improvement that might be achieved without impeding access.”

Exam waiting times can be stressful for patients, and the cost of a delay depends on the perceived value of the visit to the patient.

Before the new algorithm, the average outpatient’s waiting time at the hospital was 54 minutes. The researchers found that the problem was a complicated scheduling system, which must cater to emergency room patients, inpatients, and outpatients; while other appointments are relatively inflexible, inpatient exams usually can be delayed if necessary.

“By analysing the patient data, we found that delays were prominent because the schedule was not optimal,” explained first author Yifei Sun, a Dartmouth Engineering PhD candidate. “This research uses optimisation and simulation tools to help the MRI centres of Lahey Hospital better plan their schedule to reduce overall cost, which includes patient waiting time.”

After identifying the sources of delay, the researchers created a mathematical model that optimised the length of each exam slot and then fitted inpatient exams around the outpatient schedule. From this they built an algorithm that reduces the combined costs of outpatient waiting time, idle equipment time, employee overtime, and cancelled inpatient exams.
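The published model is a stochastic program; as a loose illustration of the underlying trade-off – a minimal sketch under made-up assumptions about slot lengths, exam-time variability, and cost weights, not the authors’ formulation – one can simulate a single scanner’s day and sweep the planned outpatient slot length to minimise a weighted cost:

import random

random.seed(1)

DAY_END_MIN = 10 * 60        # scanner staffed for 10 hours (illustrative)
N_OUTPATIENTS = 10           # fixed-slot outpatient appointments per day
N_INPATIENTS = 4             # flexible inpatient exams waiting on the ward

def day_cost(slot_min):
    """Simulate one day and return a weighted cost (weights are illustrative)."""
    t = 0                    # scanner clock, minutes from opening
    wait = idle = 0
    inpatients_done = 0
    for i in range(N_OUTPATIENTS):
        scheduled = i * slot_min
        if t < scheduled:
            gap = scheduled - t
            # A long enough idle gap can absorb a flexible inpatient exam.
            if gap >= 30 and inpatients_done < N_INPATIENTS:
                inpatients_done += 1
            else:
                idle += gap
            t = scheduled
        else:
            wait += t - scheduled                       # outpatient waits for the scanner
        t += max(20, int(random.gauss(slot_min, 12)))   # actual exam duration varies
    overtime = max(0, t - DAY_END_MIN)
    cancelled = N_INPATIENTS - inpatients_done
    return 1.0 * wait + 0.5 * idle + 2.0 * overtime + 60.0 * cancelled

# Crude optimisation: sweep candidate slot lengths and keep the cheapest on average.
best_slot = min(range(35, 61, 5), key=lambda s: sum(day_cost(s) for _ in range(500)))
print("best planned slot length:", best_slot, "minutes")

In the hospital’s real setting the cost weights, arrival processes, and the handling of emergency patients would all differ, but the same structure – simulate the day, cost the outcome, then optimise the schedule parameters – carries over.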

“This iterative improvement process did result in measurable improvements of patient wait times,” said Prof Wald. “The construction and use of a simulation model have been instrumental in educating the Lahey team about the benefits of dissecting workflow components to arrive at an optimised process outcome. We have extended this approach to identify bottlenecks in our interventional radiology workflow and to add additional capacity under the constraints of staffing schedules.”

The researchers believe that this solution may have great applicability, as the problem is common to mid-sized hospitals.

“We also provided suggestions for hospitals that don’t have optimisation tools or have different priorities, such as patient waiting times or idle machine times,” said Sun, who worked on the paper with her advisor Vikrant Vaze, the Stata Family Career Development Associate Professor of Engineering at Dartmouth.

Source: News-Medical.Net

Journal information: Sun, Y., et al. (2021) Stochastic programming for outpatient scheduling with flexible inpatient exam accommodation. Health Care Management Science. doi.org/10.1007/s10729-020-09527-z.