Researchers have developed an AI-based tool that can use smartphone camera pictures to spot suspicious pigmented lesions (SPLs) with an accuracy close to that of professional dermatologists.
Such technology would hardly put dermatologists out of work; on the contrary, there is a great need for readily available skin cancer screening. In the US, there are only 12 000 practising dermatologists, and each would need to see over 27 000 patients per year to screen the entire population for SPLs that could lead to cancer. Computer-aided diagnosis (CAD) has therefore been developed in recent years to assist with diagnosis, but has so far failed to spot melanomas in a meaningful way. Existing CAD programs analyse each SPL in isolation, whereas dermatologists compare a lesion against the others on the same patient to reach a diagnosis, the so-called ‘ugly duckling’ criterion.
This shortcoming has been addressed in a new CAD system that uses convolutional deep neural networks (CDNNs) developed by researchers at the Wyss Institute for Biologically Inspired Engineering at Harvard University and the Massachusetts Institute of Technology (MIT).
The new system was able to distinguish SPLs from non-suspicious lesions in photos of patients’ skin with ~90% accuracy, and extracted an ‘ugly duckling’ criterion that matched three dermatologists’ consensus 88% of the time.
“We essentially provide a well-defined mathematical proxy for the deep intuition a dermatologist relies on when determining whether a skin lesion is suspicious enough to warrant closer examination,” said first author Luis Soenksen, PhD, a Postdoctoral Fellow at the Wyss Institute who is also a Venture Builder at MIT. “This innovation allows photos of patients’ skin to be quickly analyzed to identify lesions that should be evaluated by a dermatologist, allowing effective screening for melanoma at the population level.”
The researchers used a database of 33 000 images to train the system, images that also included background and non-skin elements. These extraneous elements were deliberately left in so that the CDNN could handle ordinary photos taken with consumer-grade cameras. The images contained SPLs and non-suspicious skin lesions identified by three certified dermatologists.
The software then developed a ‘map’ of how far each lesion was from the others in terms of similarity, yielding an ‘ugly duckling’ criterion. To test the software, the researchers used 135 photos from 68 patients, and the system assigned an ‘oddness’ score to each lesion. These scores were then compared with dermatologists’ assessments of the same lesions, matching individual dermatologists 88% of the time and their consensus 86% of the time.
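To illustrate the idea behind such an ‘oddness’ score, here is a minimal sketch: each lesion is represented by a feature vector (in the actual system these come from the trained network, a detail not reproduced here), and a lesion's score is its mean distance to all other lesions on the same patient, so the lesion least similar to its neighbours stands out. The function name and the toy 2-D features are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def oddness_scores(embeddings):
    """Score each lesion by its mean distance to the other lesions
    on the same patient; a higher score marks an 'ugly duckling'.
    (Illustrative sketch, not the paper's actual pipeline.)"""
    X = np.asarray(embeddings, dtype=float)
    # Pairwise Euclidean distances between lesion feature vectors
    diffs = X[:, None, :] - X[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    # Mean distance of each lesion to the others (the self-distance is 0)
    return dists.sum(axis=1) / (len(X) - 1)

# Toy example: four similar lesions and one outlier in a 2-D feature space
feats = [[0.10, 0.20], [0.15, 0.22], [0.12, 0.18], [0.14, 0.21], [0.90, 0.80]]
scores = oddness_scores(feats)
# The fifth lesion is far from the cluster and gets the highest score
assert scores.argmax() == 4
```

In practice the feature vectors would come from the convolutional network's learned representation rather than hand-picked coordinates, but the ranking step works the same way.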
“This high level of consensus between artificial intelligence and human clinicians is an important advance in this field, because dermatologists’ agreement with each other is typically very high, around 90%,” said co-author Jim Collins, PhD, of the Wyss Institute, who is also the Termeer Professor of Medical Engineering and Science at MIT. “Essentially, we’ve been able to achieve dermatologist-level accuracy in diagnosing potential skin cancer lesions from images that can be taken by anybody with a smartphone, which opens up huge potential for finding and treating melanoma earlier.”
Source: Medical Xpress
Journal information: “Using deep learning for dermatologist-level detection of suspicious pigmented skin lesions from wide-field images” Science Translational Medicine, 2021.