Humans Still Beat AI in Breast Cancer Screening
A review in The BMJ finds that humans still appear to outperform AI at accurately spotting possible cases of breast cancer during screening.
At this point, there is a lack of good quality evidence to support a policy of replacing human radiologists with artificial intelligence (AI) technology when screening for breast cancer, the researchers concluded.
Breast cancer is a leading cause of death among women the world over. Mammography screening is a high volume, repetitive task for radiologists, and some cancers are not picked up.
Previous research has suggested that AI systems outperform humans and might soon be used instead of experienced radiologists. However, a recent review of 23 studies highlighted evidence gaps and concerns about the methods used.
To address this uncertainty, researchers reviewed 12 studies carried out since 2010. They found that, overall, the methods used in the studies were of poor quality, with low applicability to European or UK breast cancer screening programmes.
Three large studies involving nearly 80 000 women compared AI systems with the clinical decisions of the original radiologist. Of these, 1878 had screen detected cancer or interval cancer (cancer diagnosed in-between routine screening appointments) within 12 months of screening.
The majority of AI systems evaluated in these three studies (34 of 36, or 94%) were less accurate than a single radiologist, and all were less accurate than the consensus of two or more radiologists, which is the standard practice in Europe.
In contrast, five smaller studies involving 1086 women reported that all of the AI systems evaluated were more accurate than a single radiologist. However, these were at high risk of bias and not replicated in larger studies.
In three studies, AI was used as a pre-screen to triage which mammograms need to be examined by a radiologist and which do not. It screened out 53%, 45%, and 50% of women at low risk, but also missed 10%, 4%, and 0% of cancers detected by radiologists.
The authors point to some study limitations, such as the exclusion of non-English studies and the fast pace of AI development. Nevertheless, stringent study inclusion criteria, along with rigorous and systematic evaluation of study quality, suggest their conclusions are robust.
As such, the authors said: “Current evidence on the use of AI systems in breast cancer screening is a long way from having the quality and quantity required for its implementation into clinical practice.”
They added: “Well designed comparative test accuracy studies, randomized controlled trials, and cohort studies in large screening populations are needed which evaluate commercially available AI systems in combination with radiologists in clinical practice.”
Source: Medical Xpress