Tag: AI

Humans Still Beat AI in Breast Cancer Screening

Source: National Cancer Institute

A review in The BMJ finds that humans still appear to outperform AI at accurately spotting possible cases of breast cancer during screening.

At this point, there is a lack of good quality evidence to support a policy of replacing human radiologists with artificial intelligence (AI) technology when screening for breast cancer, the researchers concluded.

Breast cancer is a leading cause of death among women the world over. Mammography screening is a high-volume, repetitive task for radiologists, and some cancers are not picked up.

Previous research has suggested that AI systems outperform humans and might soon be used instead of experienced radiologists. However, a recent review of 23 studies highlighted evidence gaps and concerns about the methods used.

To address this uncertainty, researchers reviewed 12 studies carried out since 2010. They found that, overall, the methods used in the studies were of poor quality, with low applicability to European or UK breast cancer screening programmes.

Three large studies involving nearly 80 000 women compared AI systems with the clinical decisions of the original radiologist. Of these women, 1878 had screen-detected cancer or interval cancer (cancer diagnosed in between routine screening appointments) within 12 months of screening.

The majority (34 out of 36 or 94%) of AI systems in these three studies were less accurate than a single radiologist, and all were less accurate than the consensus of two or more radiologists, which is the standard practice in Europe.

In contrast, five smaller studies involving 1086 women reported that all of the AI systems evaluated were more accurate than a single radiologist. However, these were at high risk of bias and not replicated in larger studies.

In three studies, AI was used as a pre-screen to triage which mammograms need to be examined by a radiologist and which do not. It screened out 53%, 45%, and 50% of women as low risk, but also missed 10%, 4%, and 0% of the cancers detected by radiologists.
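
The review reports only the outcomes of these triage rules, not how they were implemented, but the arithmetic behind "screened out" and "cancers missed" follows from a single operating threshold on an AI risk score. The scores, threshold, and labels in the sketch below are invented purely for illustration.

```python
# Hypothetical sketch of AI triage as a pre-screen (invented values only).
# Each mammogram gets an AI risk score; anything below the threshold is
# "screened out" and never reviewed by a radiologist.

cases = [
    # (ai_risk_score, has_cancer) -- made-up data for illustration
    (0.02, False), (0.10, False), (0.35, False), (0.04, True),
    (0.80, True),  (0.15, False), (0.60, True),  (0.05, False),
]
THRESHOLD = 0.12  # hypothetical operating point

screened_out = [case for case in cases if case[0] < THRESHOLD]

pct_screened_out = 100 * len(screened_out) / len(cases)
missed_cancers = sum(1 for _, cancer in screened_out if cancer)
total_cancers = sum(1 for _, cancer in cases if cancer)

print(f"Screened out: {pct_screened_out:.0f}% of women")
print(f"Cancers missed by the pre-screen: {missed_cancers} of {total_cancers}")
```

Raising the threshold screens out more low-risk women but leaves more cancers unseen by a radiologist, which is the trade-off the three studies quantify.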

The authors point to some study limitations, such as the exclusion of non-English studies and the fast pace of AI development. Nevertheless, the stringent study inclusion criteria, along with rigorous and systematic evaluation of study quality, suggest their conclusions are robust.

As such, the authors said: “Current evidence on the use of AI systems in breast cancer screening is a long way from having the quality and quantity required for its implementation into clinical practice.”

They added: “Well designed comparative test accuracy studies, randomized controlled trials, and cohort studies in large screening populations are needed which evaluate commercially available AI systems in combination with radiologists in clinical practice.”

Source: Medical Xpress

Spacesuit Tech Leads to Improved Patient Outcomes

A tech startup is pioneering wearable health technology derived from the systems used to monitor astronauts in their spacesuits.

Maarten Sierhuis, a NASA alum, once remarked to Rachna Dhamija, a tech veteran and his future cofounder: “If your dad would just wear a space suit, I could monitor him”. Both had ageing parents with health issues.

Having worked for 12 years as a senior research scientist at NASA, Sierhuis used sensors and artificial intelligence (AI) to monitor astronauts in space. When astronauts go on spacewalks, their spacesuits contain various sensors that monitor their vitals, with the data being sent to NASA and distributed to the flight surgeon, biomedical engineers, and others. The ground-based crew uses that information to guide its support efforts—perhaps a reminder to drink some water and avert dehydration, or to take a short break to lower heart rate. This technology—called the Brahms Intelligent Agent platform—was licensed to Ejenta from NASA. Now, hospitals and health systems are using it to improve their patient care.

“When we started the company, we just had a very strong conviction that our parents deserve the same level of care NASA provides its astronauts,” Dhamija said.

Ejenta integrates wearable and home sensors that gather data from patients with AI-driven virtual assistants. Using a chat function, patients can use the platform to exchange messages with these assistants, called “intelligent agents” by Ejenta, right from their homes. Clinicians can securely access patient information from the Ejenta platform to better inform their care decisions.

Advances in cloud computing enabled the technology to be adapted from space to Earth. Ejenta’s founders use a cloud infrastructure to securely collect, store and analyse health data.

Ejenta—whose name is a Bengali slang term for “agents”—was founded in 2012 and originally focused on government-related work, including projects for NASA. However, in the last four years, Ejenta has evolved into a digital health company. Dhamija said the company’s AI-driven technology is what sets it apart from other digital health startups.

“There are a lot of healthcare devices available to consumers, but what’s missing is AI and the automation that can turn this data into insights a doctor can use—actionable data to make care more preventative and more proactive,” she said.

Ejenta uses its NASA technology to take data from wearable Internet of Things (IoT) devices and at-home sensors to monitor a patient’s health. Patients can interact with their assistant via text or voice and ask questions like, “What medication do I need to take with breakfast?” and receive an appropriate answer.
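
Ejenta has not published its implementation details in this article, so the snippet below is only a minimal sketch of the interaction pattern described: stored patient data plus a patient-facing agent that answers simple care questions. The data, schedule, and keyword matching are all hypothetical.

```python
# Hypothetical sketch of an "intelligent agent" answering a care question
# from stored patient data (not Ejenta's actual implementation).
from datetime import date

patient_data = {
    "medication_schedule": {
        "breakfast": ["metformin 500 mg"],   # invented example schedule
        "dinner": ["lisinopril 10 mg"],
    },
    "vitals": {"weight_kg": [(date(2021, 3, 1), 82.1), (date(2021, 3, 2), 83.4)]},
}

def answer(question: str) -> str:
    q = question.lower()
    if "medication" in q:
        for meal, meds in patient_data["medication_schedule"].items():
            if meal in q:
                return f"With {meal}, take: {', '.join(meds)}."
        return "I couldn't find that meal in your schedule."
    if "weight" in q:
        day, kg = patient_data["vitals"]["weight_kg"][-1]
        return f"Your last recorded weight was {kg} kg on {day}."
    return "Let me flag that question for your care team."

print(answer("What medication do I need to take with breakfast?"))
```

Presumably the production system draws on live sensor feeds and clinician-defined care plans rather than a hard-coded dictionary, but the question-in, answer-out loop is the same.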

In a clinical trial of Ejenta’s platform by one of the country’s largest healthcare providers, heart failure readmissions dropped by 56%. Readmissions are costly for both patients and the healthcare system, so this application can save considerable amounts of money as well as improve patients’ quality of life. In separate clinical trials, Ejenta also contributed to improved outcomes for women with high-risk pregnancies, reducing the risk of gestational diabetes, preterm birth and cesarean sections. These successes were made possible by years of difficult development.

“We had a big challenge adapting our solution, which was originally designed to monitor 12 astronauts in space, to scale up to support thousands of patients across a number of different customer types and a number of different health conditions while still being HIPAA compliant,” Dhamija said.

However, by leveraging Amazon Web Services (AWS) as its cloud provider, Ejenta was able to scale up. Dhamija said her team chose AWS because it offers both flexibility and scalability in a secure cloud environment, which is critical when dealing with healthcare data. Ejenta wanted a “cloud provider that had a reputation for providing HIPAA-compliant services our customers would trust,” she said.

Ejenta was part of the Alexa Accelerator, an Amazon programme to help companies incorporate voice technology into their innovations. Before entering the programme, Ejenta had used Alexa to support improved diabetes care management for patients. It continued this work during the accelerator.

“Alexa is one of the only voice-based solutions that gave us the ability to engage customers, whether it’s patients or their family, with voice and do it in a HIPAA-compliant way,” Dhamija said.

Ejenta’s participation in the accelerator led to its involvement in AWS Connections, a programme that introduces startups to large organisations with specific technological or business needs. Through this programme, Ejenta is developing a health and communication management system for astronauts in deep space to relay health information and communicate with their families.

“It’s translational, meaning it can be applied for both Earth and space,” Dhamija said. “If you look at some of the problems we face on Earth or space, they do inform each other, so the goal is to have our Earth-based work inform space, and vice versa.”

Source: Forbes

Social Cues Impact Human Decision-making in Emergencies

A study showed that, when participants in a simulated crash of an autonomous vehicle were told that others had chosen to crash into a wall to save pedestrians, their own willingness to do so went up by two-thirds.

As autonomous vehicles become more commonplace and the need to program them for safety emerges, a better understanding of how humans react in such situations is needed. Study author Jonathan Gratch, the principal investigator for this project and a computer scientist at the USC Institute for Creative Technologies, said that current models assume humans think differently in life-or-death situations than they actually do. There are no moral absolutes; rather, “it is more nuanced”.

Seeking to understand how humans make decisions in life-or-death situations, and how to apply these insights to the programming of autonomous vehicles and robots, researchers presented participants with a modified version of the ‘trolley problem’.

The trolley problem is a classic hypothetical scenario psychologists use to investigate human decision-making. Essentially, it involves the decision to divert a tram to hit one person or to leave it on its track and hit five, and it has a number of variations. In one medical variation of the trolley problem, one person could be killed and their organs harvested to save five terminally ill patients — a choice that is overwhelmingly rejected.  

In three of four simulations presented to them, the participants had to choose whether to tell their autonomous vehicle to hit a wall, risking harm to themselves, or to hit five pedestrians. The higher the likelihood of injury to pedestrians, the more likely participants were to choose to hit the wall and risk harming themselves. The authors showed that, in doing so, people weigh the risk of injury to themselves against the potential injury to others.
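
The paper models this weighing of risks far more carefully than a news summary can convey; the toy calculation below is not the authors' model, just an illustration of how a modest personal risk can be outweighed by a larger risk to several pedestrians. All probabilities are invented.

```python
# Toy expected-harm comparison (illustrative only, not the study's model).
p_injury_self = 0.3        # hypothetical chance of injury if the car hits the wall
p_injury_pedestrian = 0.5  # hypothetical chance of injury per pedestrian otherwise
n_pedestrians = 5

expected_harm_hit_wall = p_injury_self * 1
expected_harm_continue = p_injury_pedestrian * n_pedestrians

choice = "hit the wall" if expected_harm_hit_wall < expected_harm_continue else "continue"
print(choice)  # -> "hit the wall": expected harm 0.3 vs 2.5
```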

In the fourth scenario, a social element was added: participants were told that their peers had chosen to save the pedestrians. In this case, the proportion of participants electing to save the pedestrians went up from 30% to 50%.

However, Gratch noted that there is a reverse effect as well: “Technically there are two forces at work. When people realize their peers don’t care, this pulls people down to selfishness. When they realize they care, this pulls them up.”

The researchers showed that using the trolley problem as a basis for decision-making is insufficient, as it fails to capture the complexity of human decision-making. They also concluded that transparency in the programming of autonomous machines is important for the public, as is allowing human operators to assume control in the event of an emergency.

Source: News-Medical.Net

Journal information: de Melo, C. M., et al. (2021) Risk of Injury in Moral Dilemmas With Autonomous Vehicles. Frontiers in Robotics and AI. doi.org/10.3389/frobt.2020.572529.

Neural Network Matches Dermatologists’ Assessment of Skin Lesions

Researchers have developed an AI-based tool that can use smartphone camera pictures to spot suspicious pigmented lesions (SPLs) with an accuracy close to that of professional dermatologists.

Such technology would hardly put dermatologists out of work; on the contrary, there is a great need for readily available skin cancer screening. In the US, there are only 12 000 practising dermatologists, who would each need to see over 27 000 patients per year in order to screen the entire population for SPLs that could lead to cancer. Computer-aided diagnosis (CAD) has been developed over recent years to assist with this, but has thus far failed to spot melanomas in a meaningful way. Such CAD programs analyse only individual SPLs, whereas dermatologists also compare a lesion with the patient’s other lesions, the so-called ‘ugly duckling’ criterion, to reach a diagnosis.

This shortcoming has been addressed in a new CAD system that uses convolutional deep neural networks (CDNNs) developed by researchers at the Wyss Institute for Biologically Inspired Engineering at Harvard University and the Massachusetts Institute of Technology (MIT).

The new system was able to distinguish SPLs from non-suspicious lesions in photos of patients’ skin with ~90% accuracy, and established an ‘ugly duckling’ metric that matched the consensus of three dermatologists 88% of the time.

“We essentially provide a well-defined mathematical proxy for the deep intuition a dermatologist relies on when determining whether a skin lesion is suspicious enough to warrant closer examination,” said first author Luis Soenksen, PhD, a Postdoctoral Fellow at the Wyss Institute who is also a Venture Builder at MIT. “This innovation allows photos of patients’ skin to be quickly analyzed to identify lesions that should be evaluated by a dermatologist, allowing effective screening for melanoma at the population level.”

The researchers used a database of 33 000 images, which included background and non-skin elements, to train the system. These extraneous elements were left in so that the CDNN could handle ordinary images taken by consumer-grade cameras. The images contained SPLs and non-suspicious skin lesions identified by three certified dermatologists.

The software then developed a ‘map’ of how far away each lesion was from the others in terms of similarity, yielding an ‘ugly duckling’ metric. To test the software, the researchers used 135 photos from 68 patients; the system assigned an ‘oddness’ score to each lesion. These scores were then compared with dermatologists’ assessments of those lesions, matching the consensus of the three dermatologists 88% of the time and individual dermatologists 86% of the time.

“This high level of consensus between artificial intelligence and human clinicians is an important advance in this field, because dermatologists’ agreement with each other is typically very high, around 90%,” said co-author Jim Collins, PhD, of the Wyss Institute, who is also the Termeer Professor of Medical Engineering and Science at MIT. “Essentially, we’ve been able to achieve dermatologist-level accuracy in diagnosing potential skin cancer lesions from images that can be taken by anybody with a smartphone, which opens up huge potential for finding and treating melanoma earlier.”
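
The paper’s full CDNN pipeline is considerably more involved, but the core ‘ugly duckling’ idea, scoring each lesion by how far it sits from the patient’s other lesions in a feature space, can be sketched with ordinary NumPy. The embeddings below are random stand-ins for the network’s learned features, not the authors’ data.

```python
# Minimal sketch of an "ugly duckling" oddness score (illustrative only):
# a lesion is suspicious if its embedding sits far from the patient's other lesions.
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for CDNN embeddings of 8 lesions from one patient (64-dim features).
embeddings = rng.normal(size=(8, 64))
embeddings[3] += 3.0  # make one lesion an outlier for the demo

def oddness_scores(emb: np.ndarray) -> np.ndarray:
    """Mean distance from each lesion to all of the patient's other lesions."""
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.nan)          # ignore self-distances
    return np.nanmean(dists, axis=1)

scores = oddness_scores(embeddings)
print("Most 'ugly duckling' lesion:", int(np.argmax(scores)))  # -> 3
```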

Source: Medical Xpress

Journal information: “Using deep learning for dermatologist-level detection of suspicious pigmented skin lesions from wide-field images” Science Translational Medicine, 2021.

Speech Recognition AI Detects COVID in Coughs

An AI system originally developed at the Massachusetts Institute of Technology (MIT) to research speech patterns in people with Alzheimer’s was repurposed when the COVID pandemic hit to become an indicator for COVID in asymptomatic patients. 

“The sounds of talking and coughing are both influenced by the vocal cords and surrounding organs,” says research scientist Brian Subirana of MIT. “This means that when you talk, part of your talking is like coughing, and vice versa. It also means that things we easily derive from fluent speech, AI can pick up simply from coughs, including things like the person’s gender, mother tongue, or even emotional state. There’s in fact sentiment embedded in how you cough.”

The system was based on a neural network that was trained first on a thousand hours of human speech, then on a database of words spoken in different emotional states, and finally on a database of coughs. The result was a system that could detect COVID in an asymptomatic person from their cough with 97.1% accuracy. However, this is not a true test for COVID but an enhanced early indicator. The advantage of this technology is that it can be developed into an early warning system incorporated into something as ubiquitous as a smartphone.
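
The article describes a staged training recipe (speech, then emotionally labelled words, then coughs) rather than the model’s architecture, so the sketch below only illustrates that transfer-learning pattern with stand-in PyTorch tensors; it is not MIT’s actual network or data.

```python
# Illustrative staged fine-tuning (not MIT's actual architecture or datasets):
# the same backbone is trained on speech, then emotion, then cough labels.
import torch
from torch import nn

backbone = nn.Sequential(nn.Linear(128, 64), nn.ReLU())  # stand-in audio encoder

def train_stage(head_dim: int, n_samples: int, steps: int = 50) -> None:
    head = nn.Linear(64, head_dim)                 # task-specific output layer
    x = torch.randn(n_samples, 128)                # stand-in audio features
    y = torch.randint(0, head_dim, (n_samples,))   # stand-in labels
    opt = torch.optim.Adam(list(backbone.parameters()) + list(head.parameters()), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(head(backbone(x)), y)
        loss.backward()
        opt.step()

train_stage(head_dim=30, n_samples=256)  # stage 1: speech (e.g. word classes)
train_stage(head_dim=6, n_samples=256)   # stage 2: emotional state
train_stage(head_dim=2, n_samples=256)   # stage 3: COVID vs non-COVID coughs
```

The backbone keeps what it learned in earlier stages while each new head adapts it to the next task, which is the essence of repurposing a speech model for cough screening.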

Source: Science Alert