Category: Lab Tests and Imaging

AI Screening Could Boost Survival Rate for Hepatocellular Carcinoma from 20% to 90%

Photo by National Cancer Institute on Unsplash

A breakthrough study published in The American Journal of Pathology describes a new machine-learning model that may improve accuracy in the early diagnosis of hepatocellular carcinoma and in monitoring the impact of treatment.

Early diagnosis of hepatocellular carcinoma (HCC) – one of the most fatal malignancies – is crucial to improve patient survival. In this breakthrough study, investigators report on the development of a serum fusion-gene machine-learning model. This screening tool may increase the five-year survival rate of patients with HCC from 20% to 90% because of its improved accuracy in the early diagnosis of HCC and in monitoring the impact of treatment.

HCC is the most common form of liver cancer and accounts for around 90% of cases. Currently, the most common screening test, which measures the serum HCC biomarker alpha-fetoprotein (AFP), is not always accurate, and up to 60% of liver cancers are only diagnosed in advanced stages, resulting in a survival rate of only around 20%.

Lead investigator Jian-Hua Luo, MD, PhD, Department of Pathology, High Throughput Genome Center, and Pittsburgh Liver Research Center, University of Pittsburgh School of Medicine, explained: “Early diagnosis of liver cancer helps save lives. However, most liver cancers occur insidiously and without many symptoms. This makes early diagnosis challenging. What we need is a cost-effective, accurate, and convenient test to screen early-stage liver cancer in human populations. We wanted to explore if a machine-learning approach could be used to increase the accuracy of screening for HCC based on the status of the fusion genes.”

In the search for a more effective and efficient diagnostic tool to distinguish HCC from non-HCC cases, investigators analysed a panel of nine fusion transcripts in serum samples from 61 patients with HCC and 75 patients with non-HCC conditions using real-time quantitative reverse transcription PCR (RT-PCR). Seven of the nine fusions were frequently detected in HCC patients. The researchers then generated machine-learning models based on serum fusion-gene levels to predict HCC in the training cohort, using the leave-one-out cross-validation approach.

A four-fusion-gene logistic regression model predicted the occurrence of HCC with an accuracy of 83% to 91%. When combined with serum AFP, a two-fusion-gene plus AFP logistic regression model produced 95% accuracy across all cohorts. Furthermore, quantification of fusion-gene transcripts in the serum samples accurately assessed the impact of treatment and was able to monitor for recurrence of the cancer.
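
For readers curious what such a model looks like in practice, the sketch below shows the general shape of the approach described above: a logistic regression classifier evaluated with leave-one-out cross-validation. The data are synthetic stand-ins rather than the study's RT-PCR measurements, and the feature layout (four fusion transcripts plus AFP) is an illustrative assumption.

```python
# Minimal sketch (not the authors' code): leave-one-out cross-validation of a
# logistic regression classifier on synthetic stand-ins for serum fusion-gene
# levels plus AFP. Cohort sizes mirror those reported above (61 HCC, 75 non-HCC).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
n_hcc, n_non = 61, 75

# Columns: four hypothetical fusion-transcript levels + serum AFP (arbitrary units).
X = np.vstack([
    rng.normal(1.0, 0.5, size=(n_hcc, 5)),   # stand-in values for HCC sera
    rng.normal(0.3, 0.5, size=(n_non, 5)),   # stand-in values for non-HCC sera
])
y = np.array([1] * n_hcc + [0] * n_non)       # 1 = HCC, 0 = non-HCC

model = LogisticRegression(max_iter=1000)

# Leave-one-out: each sample is predicted by a model trained on all the others.
y_pred = cross_val_predict(model, X, y, cv=LeaveOneOut())
print(f"LOOCV accuracy: {accuracy_score(y, y_pred):.2f}")
```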

Dr. Luo commented, “The fusion-gene machine-learning model significantly improves the early detection rate of HCC over serum alpha-fetoprotein alone. It may serve as an important tool in screening for HCC and in monitoring the impact of HCC treatment. This test will find patients who are likely to have HCC.”

Dr. Luo concluded, “Early treatment of liver cancer has a 90% five-year survival rate, while late treatment has only 20%. The alternative to this test is to subject every individual with some risk of liver cancer to imaging analysis every six months, which is very costly and ineffective. In addition, when imaging results are ambiguous, this test will help to differentiate malignant versus benign lesions.”

Source: Elsevier

New Biomarker Database for Astronaut Health may be Useful to Earthlings

Photo: Pixabay CC0

As space travel becomes more frequent, an international team of researchers has developed a new biomarker tool to support the growing field of aerospace medicine and the health of astronauts.

Dr Guy Trudel (Professor in the Faculty of Medicine), Odette Laneuville (Associate Professor, Faculty of Science, and Director of Biomedical Sciences) and Dr Martin Pelchat (Associate Professor in the Department of Biochemistry, Microbiology and Immunology) are among the contributors to an international study led by Eliah Overbey of Weill Cornell Medicine and the University of Austin. Published today in Nature, the study introduces the Space Omics and Medical Atlas (SOMA), an integrated database and sample repository from a diverse range of space missions, including SpaceX and NASA flights.

Space travel creates cellular, molecular, and physiological shifts in astronauts. SOMA is expected to provide much-needed biomedical profiling that can help tease out the short- and long-term health impacts of spaceflight. This will provide baseline data for health monitoring, risk mitigation, and countermeasures on upcoming lunar, Mars, and exploration-class missions. It is meant to help keep astronauts and space travellers alive and healthy.

It may also prove useful here on Earth.

“This represents a breakthrough in the study of human adaptation and life in space. Since many of the changes in astronauts in space resemble those of people who are immobile in bed, these studies can be clinically relevant. The data are therefore important for future space exploration while also providing a correlation to people on Earth with limited mobility or who are bedridden before their rehabilitation,” says Dr Trudel, a rehabilitation physician and researcher at The Ottawa Hospital who has focused on space travel and its effects on the human immune system.

Highlights of the study include:

  • The Atlas includes extensive molecular and physiological profiles encompassing genomics, epigenomics, transcriptomics, proteomics, metabolomics, and microbiome data sets, which reveal some consistent features across missions.
  • Samples were taken pre-flight, in-flight, post-flight and throughout the recovery period.
  • A comprehensive profile of the physiological changes of the Inspiration4 (I4) crew (ages 29, 38, 42, 51) was compiled, and 13 unique biospecimen sample types were collected and processed.
  • 2911 samples were banked, with over 1000 samples processed for sequencing, imaging, and biochemical analysis, creating the first-ever aerospace medicine biobank.
  • The SOMA resource represents an over 10-fold increase in total publicly available human space omics data.

“The University of Ottawa’s Faculty of Medicine, its Faculty of Science, and The Ottawa Hospital’s Bone and Joint Research laboratory have a long history of contributions and successes in studying human adaptation to space. They also involve students from different programs, providing a unique learning experience in both bone and joint health, and in the rapidly developing field of aerospace medicine,” adds Dr Trudel.

Source: University of Ottawa

New Blood Test for Ischaemic Stroke is a ‘Game-changer’

Ischaemic and haemorrhagic stroke. Credit: Scientific Animations CC4.0

Investigators from Brigham and Women’s Hospital have developed a new test that combines blood-based biomarkers with a clinical score to identify patients experiencing large vessel occlusion (LVO) stroke with high accuracy. The results are published in the journal Stroke: Vascular and Interventional Neurology.

“We have developed a game-changing, accessible tool that could help ensure that more people suffering from stroke are in the right place at the right time to receive critical, life-restoring care,” said senior author Joshua Bernstock, MD, PhD, MPH, a clinical fellow in the Department of Neurosurgery at Brigham and Women’s Hospital.

Most strokes are ischaemic, in which blood flow to the brain is obstructed. LVO strokes are an aggressive type of ischaemic stroke caused by an obstruction in a major artery of the brain, which causes brain cells to rapidly die off from lack of oxygen. As major medical emergencies, LVO strokes require swift treatment with mechanical thrombectomy, a surgical procedure that removes the blockage.

“Mechanical thrombectomy has allowed people that otherwise would have died or become significantly disabled to be completely restored, as if their stroke never happened,” said Bernstock. “The earlier this intervention is enacted, the better the patient’s outcome is going to be. This exciting new technology has the potential to allow more people globally to get this treatment faster.”

The research team previously targeted two specific proteins found in capillary blood, one called glial fibrillary acidic protein (GFAP), which is also associated with brain bleeds and traumatic brain injury, and one called D-dimer. In this study, they demonstrated that the levels of these blood-based biomarkers combined with Field Assessment Stroke Triage for Emergency Destination (FAST-ED) scores could identify LVO ischaemic strokes while ruling out other conditions such as bleeding in the brain. Brain bleeds cause similar symptoms to LVO stroke, making them hard to distinguish from one another in the field, yet treatment for each is vastly different.

In this prospective, observational diagnostic accuracy study, the researchers looked at data from a cohort of 323 patients coded for stroke in Florida between May 2021 and August 2022. They found that combining the levels of the biomarkers GFAP and D-dimer with FAST-ED data less than six hours from the onset of symptoms allowed the test to detect LVO strokes with 93% specificity and 81% sensitivity. Other findings included that the test ruled out all patients with brain bleeds, suggesting that it may also eventually be used to detect intracerebral haemorrhage in the field.
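
As a rough illustration of how such a test is evaluated, the sketch below fits a logistic regression on three synthetic inputs standing in for GFAP, D-dimer and the FAST-ED score, then reads sensitivity and specificity off the confusion matrix. The data, the model form and the decision threshold are assumptions for illustration, not the study's pipeline.

```python
# Minimal sketch (synthetic data, not the study's analysis): combine three
# inputs standing in for GFAP, D-dimer and FAST-ED score in a logistic
# regression, then report sensitivity and specificity for LVO detection.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
n_patients = 323                                   # cohort size reported above

X = rng.normal(size=(n_patients, 3))               # [GFAP, D-dimer, FAST-ED], fake units
y = ((X @ np.array([0.8, 0.9, 1.2]) + rng.normal(scale=1.0, size=n_patients)) > 0.5).astype(int)

clf = LogisticRegression().fit(X, y)
tn, fp, fn, tp = confusion_matrix(y, clf.predict(X)).ravel()

print(f"sensitivity = {tp / (tp + fn):.2f}")       # share of LVO strokes detected
print(f"specificity = {tn / (tn + fp):.2f}")       # share of non-LVO cases ruled out
```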

Bernstock’s team also sees promising potential future use of this accessible diagnostic tool in low- and middle-income countries, where advanced imaging is not always available. It might also be useful in assessing patients with traumatic brain injuries. Next, they are carrying out another prospective trial to measure the test’s performance when used in an ambulance. They have also designed an interventional trial that leverages the technology to expedite the triage of stroke patients by having them bypass standard imaging and move directly to intervention.

“In stroke care, time is brain,” Bernstock said. “The sooner a patient is put on the right care pathway, the better they are going to do. Whether that means ruling out bleeds or ruling in something that needs an intervention, being able to do this in a prehospital setting with the technology that we built is going to be truly transformative.”

Source: Brigham and Women’s Hospital

Flexible Microdisplay Enables Real-time Visualisation in Neurosurgery

The device represents a huge leap ahead in guiding neurosurgeons with visualised brain activity

The device’s LEDs can light up in several colors. This allows surgeons to see which areas they need to operate on. It allows them to track brain states during surgery, including the onset of epileptic seizures. Credit: UCSF

A thin film that combines an electrode grid and LEDs can both track and produce a visual representation of the brain’s activity in real time during surgery, a huge improvement over the current state of the art. The device is designed to provide neurosurgeons with visual information about a patient’s brain to monitor brain states during surgical interventions to remove brain lesions, including tumours and epileptic tissue.

The team behind the device describes their work in the journal Science Translational Medicine.

Each LED in the device represents the activity of a few thousand neurons. In a series of proof-of-concept experiments in rodents and large non-primate mammals, researchers showed that the device can effectively track and display neural activity in the brain corresponding to different areas of the body. In this case, the LEDs developed by the team light up red in the areas that need to be removed by the surgeon. Surrounding areas that control critical functions and should be avoided show up in green.

The study also showed that the device can visualise the onset and map the propagation of an epileptic seizure on the surface of the brain. This would allow physicians to isolate the ‘nodes’ of the brain that are involved in epilepsy. It also would allow physicians to deliver necessary treatment by removing tissue or by using electrical pulses to stimulate the brain.

“Neurosurgeons could see and stop a seizure before it spreads, view what brain areas are involved in different cognitive processes, and visualise the functional extent of tumour spread. This work will provide a powerful tool for the difficult task of removing a tumour from the most sensitive brain areas,” said Daniel Cleary, one of the study’s coauthors, a neurosurgeon and assistant professor at Oregon Health and Science University.

The device was conceived and developed by a team of engineers and physicians from University of California San Diego and Massachusetts General Hospital (MGH) and was led by Shadi Dayeh, the paper’s corresponding author and a professor in the Department of Electrical and Computer Engineering at UC San Diego.

Protecting critical brain functions

During brain surgery, physicians need to map brain function to define which areas of the organ control critical functions and can’t be removed. Currently, neurosurgeons work with a team of electrophysiologists during the procedure. But that team and their monitoring equipment are located in a different part of the operating room.

Brain areas that need to be protected and those that need to be operated on are either marked by electrophysiologists on a paper that is brought to the surgeon or communicated verbally to the surgeon, who then places sterile papers on the brain surface to mark these regions.

“Both are inefficient ways of communicating critical information during a procedure, and could impact its outcomes,” said Dr Angelique Paulk of MGH, who is a co-author and co-inventor of the technology.

In addition, the electrodes currently used to monitor brain activity during surgery do not produce detailed, fine-grained data. So surgeons need to keep a buffer zone, known as the resection margin, of 5 to 7mm around the area they are removing inside the brain.

This means that they might leave some harmful tissue in. The new device provides a level of detail that would shrink this buffer zone to less than 1mm.

“We invented the brain microdisplay to display with precision critical cortical boundaries and to guide neurosurgery in a cost-effective device that simplifies and reduces the time of brain mapping procedures,” said Dayeh.

Researchers installed the LEDs on top of another innovation from the Dayeh lab, the platinum nanorod electrode grid (PtNRGrid). Using the PtNRGrids since 2019, Dayeh’s team pioneered human brain and spinal cord mapping with thousands of channels to monitor brain neural activity.

They reported early safety and effectiveness results in tens of human subjects in a series of 2022 articles in Science Translational Medicine (“New sensor grids record human brain signals with record-breaking resolution” and “Microelectrode array can enable safer spinal cord surgery”), ahead of Neuralink and other companies in this space.

The PtNRGrid also includes perforations, which enable physicians to insert probes to stimulate the brain with electrical signals, both for mapping and for therapy.

How it’s made

The display uses gallium nitride-based micro-LEDs, bright enough to be seen under surgical lights. The two models built measure 5mm or 32mm on a side, with 1024 or 2048 LEDs, and capture brain activity at 20 000 samples a second.

“This enables precise and real-time displays of cortical dynamics during critical surgical interventions,” said Youngbin Tchoe, the first author and co-inventor, formerly a postdoc in the Dayeh group at UC San Diego and now an assistant professor at Ulsan National Institute of Science and Technology.

In addition to the LEDs, the device includes acquisition and control electronics as well as software drivers to analyse and project cortical activity directly from the surface of the brain.
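
A toy sketch of what such driver software might do is shown below: it bins the 20 000-samples-per-second electrode streams into display frames and maps each channel's signal power to an LED brightness. The frame rate, the brightness mapping and the red/green colour logic are assumptions for illustration, not the published drivers.

```python
# Toy sketch (assumptions, not the published drivers): turn 20 kHz recordings
# from N channels into per-LED brightness values at an assumed display frame rate.
import numpy as np

FS = 20_000           # samples per second per channel (reported above)
FRAME_RATE = 30       # assumed refresh rate for illustration
N_LEDS = 1024         # smaller of the two device configurations

samples_per_frame = FS // FRAME_RATE

def frame_brightness(chunk: np.ndarray) -> np.ndarray:
    """Map one frame of samples (N_LEDS x samples_per_frame) to 0-1 brightness.

    Brightness here simply tracks each channel's RMS signal power; the real
    device maps functional labels to red (resect) or green (protect) LEDs.
    """
    rms = np.sqrt((chunk ** 2).mean(axis=1))
    return rms / (rms.max() + 1e-12)

# Example: one frame of simulated data.
chunk = np.random.default_rng(2).normal(size=(N_LEDS, samples_per_frame))
print(frame_brightness(chunk)[:5])
```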

“The brain iEEG-microdisplay can impressively both record the activity of the brain to a very fine degree and display this activity for a neurosurgeon to use in the course of surgery. We hope that this device will ultimately lead to better clinical outcomes for patients with its ability to both reveal and communicate the detailed activity of the underlying brain during surgery,” said study coauthor Jimmy Yang, a neurosurgeon and assistant professor at The Ohio State University.

Next steps

Dayeh’s team is working to build a microdisplay that will include 100 000 LEDs, with a resolution equivalent to that of a smartphone screen – for a fraction of the cost of a high-end smartphone. Each LED in those displays would reflect the activity of a few hundred neurons.

These brain microdisplays would also include a foldable portion. This would allow surgeons to operate within the foldable portion and monitor the impact of the procedure as the other, unfolded portion of the microdisplay shows the status of the brain in real time.

Researchers are also working on one limitation of the study – the close proximity of the LEDs and the PtNRGrid sensors introduced slight interference and noise in the data.

The team plans to build customised hardware to change the frequency of the pulses that turn on the LEDs to make it easier to screen out that signal, which is not relevant to the brain’s electrical activity.

Source: University of California San Francisco

Could Diamond Dust Replace Gadolinium in MRI?

Photo by Mart Production on Pexels

An unexpected discovery surprised a scientist at the Max Planck Institute for Intelligent Systems in Stuttgart: nanometre-sized diamond particles, which were intended for a completely different purpose, shone brightly in a magnetic resonance imaging experiment – outshining the actual contrast agent, the heavy metal gadolinium.

The researchers, publishing their serendipitous discovery in Advanced Materials, believe that diamond nanoparticles, in addition to their use in drug delivery to treat tumour cells, might one day become a novel MRI contrast agent.

While the discovery of diamond dust’s potential as a future MRI contrast agent may never be considered a turning point in science history, its signal-enhancing properties are nevertheless an unexpected finding which may open up new possibilities: diamond dust glows brightly in MRI even days after being injected.

Perhaps it could replace gadolinium, which has been used in clinics for more than 30 years to enhance the brightness of tissues to detect tumours, inflammation, or vascular abnormalities. But when injected into a patient’s bloodstream, gadolinium travels not only to tumour tissue but also to surrounding healthy tissue. It is retained in the brain and kidneys, persisting months to years after the last administration, and its long-term effects are not yet known. Gadolinium also causes a number of other side effects, and the search for an alternative has been going on for years.

Serendipity often advances science

Could diamond dust, a carbon-based material, become a well-tolerated alternative because of an unexpected discovery made in a laboratory at the Max Planck Institute for Intelligent Systems in Stuttgart?

Dr Jelena Lazovic Zinnanti was working on an experiment using nanometre-sized diamond particles for an entirely different purpose. The research scientist, who heads the Central Scientific Facility Medical Systems at MPI-IS, was surprised when she put the 3–5nm particles into tiny drug-delivery capsules made of gelatin. She wanted these capsules to rupture when exposed to heat. She assumed that diamond dust, with its high heat capacity, could help.

“I had intended to use the dust only to heat up the drug carrying capsules,” Jelena recollects.

“I used gadolinium to track the dust particles’ position. I intended to learn if the capsules with diamonds inside would heat up better. While performing preliminary tests, I got frustrated, because gadolinium would leak out of the gelatin – just as it leaks out of the bloodstream into the tissue of a patient. I decided to leave gadolinium out. When I took MRI images a few days later, to my surprise, the capsules were still bright. Wow, this is interesting, I thought! The diamond dust seemed to have better signal enhancing properties than gadolinium. I hadn’t expected that.”

Jelena took these findings further by injecting the diamond dust into live chicken embryos. She discovered that while gadolinium diffuses everywhere, the diamond nanoparticles stayed in the blood vessels, didn’t leak out and later shone brightly in the MRI, just as they had done in the gelatin capsules.

While other scientists had published papers showing how they used diamond particles attached to gadolinium for magnetic resonance imaging, no one had ever shown that diamond dust itself could be a contrast agent. Two years later, Jelena became the lead author of a paper now published in Advanced Materials.

“Why the diamond dust shines bright in our MRI still remains a mystery to us,” says Jelena.

She can only assume the reason is the dust’s magnetic properties: “I think the tiny particles have carbons that are slightly paramagnetic. The particles may have a defect in their crystal lattice, making them slightly magnetic. That’s why they behave like a T1 contrast agent such as gadolinium. Additionally, we don’t know whether diamond dust could potentially be toxic, something that needs to be carefully examined in the future.”

Source: Max Planck Institute for Intelligent Systems

Researchers Demonstrate the Effect of Neurochemicals on fMRI Readings

Photo by Fakurian Design on Unsplash

The brain is an incredibly complex and active organ that uses electricity and chemicals to transmit and receive signals between its sub-regions. Researchers have explored various technologies to directly or indirectly measure these signals to learn more about the brain. Functional magnetic resonance imaging (fMRI), for example, allows them to detect brain activity via changes related to blood flow.

Yen-Yu Ian Shih, PhD, professor of neurology and associate director of UNC’s Biomedical Research Imaging Center, and his fellow lab members have long been curious about how neurochemicals in the brain regulate and influence neural activity, blood flow, and subsequently, fMRI measurement in the brain.

A new study by the lab has confirmed their suspicions that fMRI interpretation is not as straightforward as it seems.

“Neurochemical signalling to blood vessels is less frequently considered when interpreting fMRI data,” said Shih, who also leads the Center for Animal MRI. “In our study on rodent models, we showed that neurochemicals, aside from their well-known signalling actions to typical brain cells, also signal to blood vessels, and this could have significant contributions to fMRI measurements.”

Their findings, published in Nature Communications, stem from the installation and upgrade of two 9.4-Tesla animal MRI systems and a 7-Tesla human MRI system at the Biomedical Research Imaging Center.

When activity in neurons increases in a specific brain region, blood flow and oxygen levels increase in the area, usually proportionate to the strength of neural activity. Researchers decided to use this phenomenon to their advantage and eventually developed fMRI techniques to detect these changes in the brain.

For years, this method has helped researchers better understand brain function and influenced their knowledge about human cognition and behaviour. The new study from Shih’s lab, however, demonstrates that this well-established neuro-vascular relationship does not apply across the entire brain because cell types and neurochemicals vary across brain areas.

Shih’s team focused on the striatum, a region deep in the brain involved in cognition, motivation, reward, and sensorimotor function, to identify the ways in which certain neurochemicals and cell types in the brain region may be influencing fMRI signals.

For their study, Shih’s lab controlled neural activity in rodent brains using a light-based technique, while measuring electrical, optical, chemical, and vascular signals to help interpret fMRI data. The researchers then manipulated the brain’s chemical signalling by injecting different drugs into the brain and evaluated how the drugs influenced the fMRI responses.

They found that in some cases, neural activity in the striatum went up, but the blood vessels constricted, causing negative fMRI signals. This is related to internal opioid signalling in the striatum. Conversely, when another neurochemical, dopamine, predominated signalling in the striatum, the fMRI signals were positive.

“We identified several instances where fMRI signals in the striatum can look quite different from expected,” said Shih. “It’s important to be mindful of underlying neurochemical signaling that can influence blood vessels or perivascular cells in parallel, potentially overshadowing the fMRI signal changes triggered by neural activity.”

Members of Shih’s lab, including first- and co-authors Dominic Cerri, PhD, and Lindsey Walton, PhD, travelled to the University of Sussex in the United Kingdom, where they were able to perform experiments and further demonstrate the opioid’s vascular effects.

They also collected human fMRI data at UNC’s 7-Tesla MRI system and collaborated with researchers at Stanford University to explore possible findings using transcranial magnetic stimulation, a procedure that uses magnetic fields to stimulate the human brain.

By better understanding fMRI signaling, basic science researchers and physician scientists will be able to provide more precise insights into neural activity changes in healthy brains, as well as in cases of neurological and neuropsychiatric disorders.

Source: UNC School of Medicine

Is AI a Help or Hindrance to Radiologists? It’s Down to the Doctor

New research shows AI isn’t always a help for radiologists

Photo by Anna Shvets

One of the most touted promises of medical artificial intelligence tools is their ability to augment human clinicians’ performance by helping them interpret images such as X-rays and CT scans with greater precision to make more accurate diagnoses.

But the benefits of using AI tools on image interpretation appear to vary from clinician to clinician, according to new research led by investigators at Harvard Medical School, working with colleagues at MIT and Stanford.

The study findings suggest that individual clinician differences shape the interaction between human and machine in critical ways that researchers do not yet fully understand. The analysis, published in Nature Medicine, is based on data from an earlier working paper by the same research group released by the National Bureau of Economic Research.

In some instances, the research showed, use of AI can interfere with a radiologist’s performance and reduce the accuracy of their interpretation.

“We find that different radiologists, indeed, react differently to AI assistance – some are helped while others are hurt by it,” said co-senior author Pranav Rajpurkar, assistant professor of biomedical informatics in the Blavatnik Institute at HMS.

“What this means is that we should not look at radiologists as a uniform population and consider just the ‘average’ effect of AI on their performance,” he said. “To maximize benefits and minimize harm, we need to personalize assistive AI systems.”

The findings underscore the importance of carefully calibrated implementation of AI into clinical practice, but they should in no way discourage the adoption of AI in radiologists’ offices and clinics, the researchers said.

Instead, the results should signal the need to better understand how humans and AI interact and to design carefully calibrated approaches that boost human performance rather than hurt it.

“Clinicians have different levels of expertise, experience, and decision-making styles, so ensuring that AI reflects this diversity is critical for targeted implementation,” said Feiyang “Kathy” Yu, who conducted the work while at the Rajpurkar lab and who is co-first author on the paper with Alex Moehring of the MIT Sloan School of Management.

“Individual factors and variation would be key in ensuring that AI advances rather than interferes with performance and, ultimately, with diagnosis,” Yu said.

AI tools affected different radiologists differently

While previous research has shown that AI assistants can, indeed, boost radiologists’ diagnostic performance, these studies have looked at radiologists as a whole without accounting for variability from radiologist to radiologist.

In contrast, the new study looks at how individual clinician factors – area of specialty, years of practice, prior use of AI tools – come into play in human-AI collaboration.

The researchers examined how AI tools affected the performance of 140 radiologists on 15 X-ray diagnostic tasks – how reliably the radiologists were able to spot telltale features on an image and make an accurate diagnosis. The analysis involved 324 patient cases with 15 pathologies: abnormal conditions captured on X-rays of the chest.

To determine how AI affected doctors’ ability to spot and correctly identify problems, the researchers used advanced computational methods that captured the magnitude of change in performance when using AI and when not using it.
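
The core of such an analysis can be pictured with a small sketch like the one below, which computes each radiologist's accuracy with and without AI assistance and looks at the per-radiologist difference. The data are synthetic and the method is only a simplified stand-in for the study's computational approach.

```python
# Simplified stand-in (synthetic data): per-radiologist change in diagnostic
# accuracy with vs. without AI assistance, rather than a single average effect.
import numpy as np

rng = np.random.default_rng(3)
n_radiologists, n_cases = 140, 324        # sizes reported above

# Hypothetical per-case correctness (1 = correct read) for each radiologist.
acc_without = rng.binomial(1, 0.70, size=(n_radiologists, n_cases)).mean(axis=1)
acc_with = rng.binomial(1, 0.72, size=(n_radiologists, n_cases)).mean(axis=1)

delta = acc_with - acc_without            # positive = helped by AI, negative = hurt

print(f"mean change in accuracy: {delta.mean():+.3f}")
print(f"radiologists helped: {(delta > 0).sum()}, hurt: {(delta < 0).sum()}")
```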

The effect of AI assistance was inconsistent and varied across radiologists, with the performance of some radiologists improving with AI while that of others worsened.

AI tools influenced human performance unpredictably

AI’s effects on human radiologists’ performance varied in often surprising ways.

For instance, contrary to what the researchers expected, factors such as how many years of experience a radiologist had, whether they specialised in thoracic (chest) radiology, and whether they had used AI readers before did not reliably predict how an AI tool would affect a doctor’s performance.

Another finding that challenged the prevailing wisdom: Clinicians who had low performance at baseline did not benefit consistently from AI assistance. Some benefited more, some less, and some none at all. Overall, however, lower-performing radiologists at baseline had lower performance with or without AI. The same was true among radiologists who performed better at baseline. They performed consistently well, overall, with or without AI.

Then came a not-so-surprising finding: More accurate AI tools boosted radiologists’ performance, while poorly performing AI tools diminished the diagnostic accuracy of human clinicians.

While the analysis was not done in a way that allowed researchers to determine why this happened, the finding points to the importance of testing and validating AI tool performance before clinical deployment, the researchers said. Such pre-testing could ensure that inferior AI doesn’t interfere with human clinicians’ performance and, therefore, patient care.

What do these findings mean for the future of AI in the clinic?

The researchers cautioned that their findings do not provide an explanation for why and how AI tools seem to affect performance across human clinicians differently, but note that understanding why would be critical to ensuring that AI radiology tools augment human performance rather than hurt it.

To that end, the team noted, AI developers should work with physicians who use their tools to understand and define the precise factors that come into play in the human-AI interaction.

And, the researchers added, the radiologist-AI interaction should be tested in experimental settings that mimic real-world scenarios and reflect the actual patient population for which the tools are designed.

Apart from improving the accuracy of the AI tools, it’s also important to train radiologists to detect inaccurate AI predictions and to question an AI tool’s diagnostic call, the research team said. To achieve that, AI developers should ensure that they design AI models that can “explain” their decisions.

“Our research reveals the nuanced and complex nature of machine-human interaction,” said study co-senior author Nikhil Agarwal, professor of economics at MIT. “It highlights the need to understand the multitude of factors involved in this interplay and how they influence the ultimate diagnosis and care of patients.”

Source: Harvard Medical School

A Better View of Atherosclerotic Plaques with New Imaging Technique

Source: Wikimedia CC0

Researchers have developed a new catheter-based device that combines two powerful optical techniques to image atherosclerotic plaques that can build up inside the heart’s coronary arteries. By providing new details about plaque, the device could help clinicians and researchers improve treatments for preventing heart attacks and strokes.

“Atherosclerosis, leading to heart attacks and strokes, is the number one cause of death in Western societies – exceeding all combined cancer types – and, therefore, a major public health issue,” said research team leader Laura Marcu from University of California, Davis. “Better clinical management made possible by advanced intravascular imaging tools will benefit patients by providing more accurate information to help cardiologists tailor treatment or by supporting the development of new therapies.”

In the Optica Publishing Group journal Biomedical Optics Express, researchers describe their new flexible device, which combines fluorescence lifetime imaging (FLIM) and polarisation-sensitive optical coherence tomography (PSOCT) to capture rich information about the composition, morphology and microstructure of atherosclerotic plaques. The work was a collaborative project with Brett Bouma and Martin Villiger, experts in OCT from the Wellman Center for Photomedicine at Massachusetts General Hospital.

“With further testing and development, our device could be used for longitudinal studies where intravascular imaging is obtained from the same patients at different timepoints, providing a picture of plaque evolution or response to therapeutic interventions,” said Julien Bec, first author of the paper. “This will be very valuable to better understand disease evolution, evaluate the efficacy of new drugs and treatments and guide stenting procedures used to restore normal blood flow.”

Gaining an unprecedented view

Most of what scientists know about how atherosclerosis forms and develops over time comes from histopathology studies of postmortem coronary specimens. Although the development of imaging systems such as intravascular ultrasound and intravascular OCT has made it possible to study plaques in living patients, there is still a need for improved methods and tools to investigate and characterise atherosclerosis.

To address this need, the researchers embarked on a multi-year research project to develop and validate multispectral FLIM as an intravascular imaging modality. FLIM can provide insights into features such as the composition of the extracellular matrix, the presence of inflammation and the degree of calcification inside an artery. In earlier work, they combined FLIM with intravascular ultrasound, and in this new work they combined it with PSOCT. PSOCT provides high-resolution morphological information along with birefringence and depolarisation measurements. When used together, FLIM and PSOCT provide an unprecedented amount of information on plaque morphology, microstructure and biochemical composition.

“Birefringence provides information about the plaque collagen, a key structural protein that helps with lesion stabilization, and depolarisation is related to lipid content that contributes to plaque destabilization,” said Bec. “Holistically, this hybrid approach can provide the most detailed picture of plaque characteristics of all intravascular imaging modalities reported to date.”

Getting two imaging modalities into one device

The development of multimodal intravascular imaging systems compatible with coronary catheterisation is technologically challenging. It requires flexible catheters less than 1mm in diameter that can operate in vessels with sharp twists and turns. A high imaging speed of around 100 frames per second is also necessary to limit cardiac motion artefacts and ensure proper imaging inside an artery.

To integrate FLIM and PSOCT into a single device without compromising the performance of either imaging modality, the researchers used optical components previously developed by Marcu’s lab and other research groups. Key to achieving high PSOCT performance was a newly designed rotary collimator with high light throughput and a high return loss, i.e., very little power reflected back toward the light source relative to the power incident on the device. The catheter system they developed has dimensions and flexibility similar to those of the intravascular imaging devices currently in clinical use.
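
For reference, return loss in decibels follows the standard optics convention below (a general textbook definition, not a formula quoted from the paper), so a higher value corresponds to less back-reflection:

```latex
\[
  \mathrm{RL}\,[\mathrm{dB}] \;=\; 10 \log_{10}\!\left(\frac{P_{\text{incident}}}{P_{\text{reflected}}}\right)
\]
```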

After testing the new system with artificial tissue to demonstrate basic functionality on well-characterised samples, the researchers also showed that it could be used to measure properties of a healthy coronary artery removed from a pig. Finally, in vivo testing in swine hearts demonstrated that the hybrid catheter system’s performance was sufficient to support work toward clinical validation. These tests all showed that the FLIM-PSOCT catheter system could simultaneously acquire co-registered FLIM data over four distinct spectral bands and PSOCT backscattered intensity, birefringence and depolarisation information.

Next, the researchers plan to use the intravascular imaging system to image plaques in ex vivo human coronary arteries. By comparing the optical signals acquired using the system with plaque characteristics identified by expert pathologists, they can better understand which features can be identified by FLIM-PSOCT and use this to develop prediction models. They also plan to move forward with testing in support of clinical validation of the system in patients.

Source: Optica

New, More Accurate Approach to Blood Tests for Determining Diabetes Risks

Photo by National Cancer Institute on Unsplash

A new approach to blood tests could potentially be used to estimate a patient’s risk of type 2 diabetes, according to a new study appearing in BMC’s Journal of Translational Medicine. Currently, the most commonly used inflammatory biomarker for predicting the risk of type 2 diabetes is high-sensitivity C-reactive protein (CRP). But new research has suggested that jointly assessing biomarkers, rather than assessing each individually, would improve the chances of predicting diabetes risk and diabetic complications.

A study by Edith Cowan University (ECU) researcher Dan Wu investigated the connection between systemic inflammation, assessed jointly by cumulative high-sensitivity CRP and another biomarker, the monocyte-to-high-density lipoprotein ratio (MHR), and incident type 2 diabetes.

The study followed more than 40 800 non-diabetic participants over a near ten-year period, with more than 4800 of the participants developing diabetes over this period.

Wu said that among those patients presenting with type 2 diabetes, a significant interaction between MHR and CRP was observed.

“Specifically, increases in the MHR in each CRP stratum increased the risk of type 2 diabetes; concomitant increases in MHR and CRP presented significantly higher incidence rates and risks of diabetes.

“Furthermore, the association between chronic inflammation (reflected by the joint cumulative MHR and CRP exposure) and incident diabetes was highly age- and sex-specific and influenced by hypertension, high cholesterol, or prediabetes. The addition of the MHR and CRP to the clinical risk model significantly improved the prediction of incident diabetes,” said Wu.
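
A minimal sketch of that last step is shown below: fit a baseline clinical logistic regression model, then refit with MHR and CRP added and compare discrimination (AUC). All data are synthetic and the covariates are placeholders, so this only illustrates the shape of the comparison, not the study's model.

```python
# Minimal sketch (synthetic data): compare a baseline clinical risk model with
# one that also includes MHR (monocyte / HDL) and CRP, using AUC as the metric.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 5000

clinical = rng.normal(size=(n, 3))                      # placeholder covariates (age, BMI, BP)
monocytes = rng.normal(0.5, 0.1, size=n)                # made-up units
hdl = np.clip(rng.normal(1.3, 0.3, size=n), 0.5, None)  # made-up units
crp = rng.lognormal(0.0, 0.5, size=n)                   # made-up units
mhr = monocytes / hdl                                   # monocyte-to-HDL ratio

# Synthetic outcome with roughly 12% incidence, loosely echoing the cohort above.
risk = clinical @ np.array([0.4, 0.3, 0.2]) + 1.5 * mhr + 0.3 * np.log(crp)
y = (risk + rng.normal(size=n) > np.percentile(risk, 88)).astype(int)

for name, X in [("clinical only", clinical),
                ("clinical + MHR + CRP", np.column_stack([clinical, mhr, crp]))]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    proba = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    print(f"{name}: AUC = {roc_auc_score(y_te, proba):.3f}")
```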

Biological sex a risk factor

The study found that females had a greater risk of type 2 diabetes conferred by joint increases in CRP and MHR, with Wu stating that sex hormones could account for these differences.

Wu said that the research findings corroborated the involvement of chronic inflammation in causing early-onset diabetes and merited specific attention.

“Epidemiological evidence indicates a consistent increase in early-onset diabetes, especially in developing countries. Leveraging this age-specific association between chronic inflammation and type 2 diabetes may be a promising method for achieving early identification of at-risk young adults and developing personalised interventions,” she added.

Wu noted that the chronic progressive nature of diabetes and the enormous burden of subsequent comorbidities further highlighted the urgent need to address this critical health issue.

Although aging and genetics are non-modifiable risk factors, other risk factors could be modified through lifestyle changes.

Inflammation is strongly influenced by life activities and metabolic conditions such as diet, sleep disruptions, chronic stress, and glucose and cholesterol dysregulation, thereby indicating the potential benefits of monitoring risk-related metabolic conditions.

Wu said that the dual advantages of cost-effectiveness and wide availability make cumulative MHR and CRP well suited for widespread use in current clinical settings as a convenient tool for predicting the risk of diabetes.

Source: Edith Cowan University

Terahertz Biosensor can Accurately Detect Skin Cancer

3D structure of a melanoma cell derived by ion abrasion scanning electron microscopy. Credit: Sriram Subramaniam/ National Cancer Institute

Researchers have developed a revolutionary biosensor using terahertz (THz) waves that can detect skin cancer with exceptional sensitivity, potentially paving the way for earlier and easier diagnoses. Published in the journal IEEE Transactions on Biomedical Engineering, the study presents a significant advancement in early cancer detection, thanks to a multidisciplinary collaboration of teams from Queen Mary University of London and the University of Glasgow.

“Traditional methods for detecting skin cancer often involve expensive, time-consuming CT and PET scans and invasive higher-frequency technologies,” explains Dr Shohreh Nourinovin, Postdoctoral Research Associate at Queen Mary’s School of Electronic Engineering and Computer Science, and the study’s first author.

“Our biosensor offers a non-invasive and highly efficient solution, leveraging the unique properties of THz waves – a type of radiation with lower energy than X-rays, thus safe for humans – to detect subtle changes in cell characteristics.”

The key innovation lies in the biosensor’s design. Featuring tiny, asymmetric resonators on a flexible substrate, it can detect subtle changes in the properties of cells.

Unlike traditional methods that rely solely on refractive index, this device analyses a combination of parameters, including resonance frequency, transmission magnitude, and a value called “Full Width at Half Maximum” (FWHM). This comprehensive approach provides a richer picture of the tissue, allowing more accurate differentiation between healthy and cancerous cells and measurement of the tissue’s degree of malignancy.
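
To make those three parameters concrete, the sketch below extracts them from a simulated transmission spectrum with a single Lorentzian resonance dip. The line shape, frequencies and dip depth are arbitrary assumptions used only to show how resonance frequency, transmission magnitude and FWHM are read off a spectrum.

```python
# Illustrative sketch (simulated spectrum): extract resonance frequency,
# transmission magnitude at resonance, and FWHM from a Lorentzian dip.
import numpy as np

freq = np.linspace(0.5, 2.5, 2001)               # THz, 1 GHz grid
f0, gamma = 1.4, 0.08                            # assumed resonance and half-width
dip = 0.7 / (1 + ((freq - f0) / gamma) ** 2)     # Lorentzian absorption dip
transmission = 1.0 - dip

i_res = transmission.argmin()
res_freq = freq[i_res]                           # resonance frequency
t_res = transmission[i_res]                      # transmission magnitude at resonance

# FWHM: width of the dip at half its depth.
half_level = 1.0 - 0.5 * (1.0 - t_res)
below = np.where(transmission <= half_level)[0]
fwhm = freq[below[-1]] - freq[below[0]]

print(f"resonance {res_freq:.3f} THz, |T| {t_res:.2f}, FWHM {fwhm:.3f} THz")
```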

In tests, the biosensor successfully differentiated between normal skin cells and basal cell carcinoma (BCC) cells, even at different concentrations. This ability to detect early-stage cancer holds immense potential for improving patient outcomes.

“The implications of this study extend far beyond skin cancer detection,” says Dr Nourinovin.

“This technology could be used for early detection of various cancers and other diseases, like Alzheimer’s, with potential applications in resource-limited settings due to its portability and affordability.”

Dr Nourinovin’s research journey wasn’t without its challenges.

Her project, which initially focused on THz spectroscopy for cancer analysis, was temporarily halted by the COVID pandemic. However, this setback led her to explore the potential of THz metasurfaces, a novel approach that sparked a new chapter in her research.

Source: Queen Mary University of London