Category: Lab Tests and Imaging

Researchers Demonstrate the Effect of Neurochemicals on fMRI Readings

Photo by Fakurian Design on Unsplash

The brain is an incredibly complex and active organ that uses electricity and chemicals to transmit and receive signals between its sub-regions. Researchers have explored various technologies to directly or indirectly measure these signals to learn more about the brain. Functional magnetic resonance imaging (fMRI), for example, allows them to detect brain activity via changes related to blood flow.

Yen-Yu Ian Shih, PhD, professor of neurology and associate director of UNC’s Biomedical Research Imaging Center, and his fellow lab members have long been curious about how neurochemicals in the brain regulate and influence neural activity, blood flow, and, in turn, fMRI measurements.

A new study by the lab has confirmed their suspicions that fMRI interpretation is not as straightforward as it seems.

“Neurochemical signalling to blood vessels is less frequently considered when interpreting fMRI data,” said Shih, who also leads the Center for Animal MRI. “In our study on rodent models, we showed that neurochemicals, aside from their well-known signalling actions to typical brain cells, also signal to blood vessels, and this could have significant contributions to fMRI measurements.”

Their findings, published in Nature Communications, stem from the installation and upgrade of two 9.4-Tesla animal MRI systems and a 7-Tesla human MRI system at the Biomedical Research Imaging Center.

When activity in neurons increases in a specific brain region, blood flow and oxygen levels increase in the area, usually proportionate to the strength of neural activity. Researchers decided to use this phenomenon to their advantage and eventually developed fMRI techniques to detect these changes in the brain.
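This proportional coupling is what the standard linear model of the fMRI signal formalises: the measured blood-oxygen-level-dependent (BOLD) response is approximated as the neural activity convolved with a haemodynamic response function. A minimal sketch of that textbook approximation is shown below; the gamma-shaped response function, timings, and toy activity trace are illustrative assumptions, not the study's own analysis.

```python
import math
import numpy as np

# Toy neural activity: brief bursts of firing in one brain region, 1-second time bins
t = np.arange(0, 120, 1.0)
neural = np.zeros_like(t)
neural[[20, 21, 22, 60, 61, 90]] = 1.0

# Gamma-shaped haemodynamic response function (a common textbook approximation)
hrf_t = np.arange(0, 30, 1.0)
hrf = hrf_t ** 5 * np.exp(-hrf_t) / math.factorial(5)

# Under the standard linear model, the BOLD-like signal is the neural activity convolved
# with the response function: stronger or longer activity gives a larger response.
bold = np.convolve(neural, hrf)[: len(t)]
print(np.round(bold, 2)[18:35])
```

The UNC study's point, developed below, is that this proportional coupling cannot simply be assumed everywhere in the brain.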

For years, this method has helped researchers better understand brain function and influenced their knowledge about human cognition and behaviour. The new study from Shih’s lab, however, demonstrates that this well-established neurovascular relationship does not apply across the entire brain, because cell types and neurochemicals vary across brain areas.

Shih’s team focused on the striatum, a region deep in the brain involved in cognition, motivation, reward, and sensorimotor function, to identify the ways in which certain neurochemicals and cell types in the brain region may be influencing fMRI signals.

For their study, Shih’s lab controlled neural activity in rodent brains using a light-based technique, while measuring electrical, optical, chemical, and vascular signals to help interpret fMRI data. The researchers then manipulated the brain’s chemical signalling by injecting different drugs into the brain and evaluated how the drugs influenced the fMRI responses.

They found that in some cases, neural activity in the striatum went up but the blood vessels constricted, producing negative fMRI signals; this effect was related to internal opioid signaling in the striatum. Conversely, when another neurochemical, dopamine, dominated signaling in the striatum, the fMRI signals were positive.

“We identified several instances where fMRI signals in the striatum can look quite different from expected,” said Shih. “It’s important to be mindful of underlying neurochemical signaling that can influence blood vessels or perivascular cells in parallel, potentially overshadowing the fMRI signal changes triggered by neural activity.”

Members of Shih’s lab, including first- and co-authors Dominic Cerri, PhD, and Lindsey Walton, PhD, travelled to the University of Sussex in the United Kingdom, where they were able to perform experiments and further demonstrate the opioid’s vascular effects.

They also collected human fMRI data on UNC’s 7-Tesla MRI system and collaborated with researchers at Stanford University to explore possible findings using transcranial magnetic stimulation, a procedure that uses magnetic fields to stimulate the human brain.

By better understanding fMRI signaling, basic science researchers and physician scientists will be able to provide more precise insights into neural activity changes in healthy brains, as well as in cases of neurological and neuropsychiatric disorders.

Source: UNC School of Medicine

Is AI a Help or Hindrance to Radiologists? It’s Down to the Doctor

New research shows AI isn’t always a help for radiologists

Photo by Anna Shvets

One of the most touted promises of medical artificial intelligence tools is their ability to augment human clinicians’ performance by helping them interpret images such as X-rays and CT scans with greater precision to make more accurate diagnoses.

But the benefits of using AI tools for image interpretation appear to vary from clinician to clinician, according to new research led by investigators at Harvard Medical School working with colleagues at MIT and Stanford.

The study findings suggest that individual clinician differences shape the interaction between human and machine in critical ways that researchers do not yet fully understand. The analysis, published in Nature Medicine, is based on data from an earlier working paper by the same research group released by the National Bureau of Economic Research.

In some instances, the research showed, use of AI can interfere with a radiologist’s performance and reduce the accuracy of their interpretation.

“We find that different radiologists, indeed, react differently to AI assistance – some are helped while others are hurt by it,” said co-senior author Pranav Rajpurkar, assistant professor of biomedical informatics in the Blavatnik Institute at HMS.

“What this means is that we should not look at radiologists as a uniform population and consider just the ‘average’ effect of AI on their performance,” he said. “To maximize benefits and minimize harm, we need to personalize assistive AI systems.”

The findings underscore the importance of carefully calibrated implementation of AI into clinical practice, but they should in no way discourage the adoption of AI in radiologists’ offices and clinics, the researchers said.

Instead, the results should signal the need to better understand how humans and AI interact and to design carefully calibrated approaches that boost human performance rather than hurt it.

“Clinicians have different levels of expertise, experience, and decision-making styles, so ensuring that AI reflects this diversity is critical for targeted implementation,” said Feiyang “Kathy” Yu, who conducted the work while at the Rajpurkar lab and who is co-first author on the paper with Alex Moehring of the MIT Sloan School of Management.

“Individual factors and variation would be key in ensuring that AI advances rather than interferes with performance and, ultimately, with diagnosis,” Yu said.

AI tools affected different radiologists differently

While previous research has shown that AI assistants can, indeed, boost radiologists’ diagnostic performance, these studies have looked at radiologists as a whole without accounting for variability from radiologist to radiologist.

In contrast, the new study looks at how individual clinician factors – area of specialty, years of practice, prior use of AI tools – come into play in human-AI collaboration.

The researchers examined how AI tools affected the performance of 140 radiologists on 15 X-ray diagnostic tasks – how reliably the radiologists were able to spot telltale features on an image and make an accurate diagnosis. The analysis involved 324 patient cases with 15 pathologies – abnormal conditions captured on chest X-rays.

To determine how AI affected doctors’ ability to spot and correctly identify problems, the researchers used advanced computational methods that captured the magnitude of change in performance when using AI and when not using it.
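The basic quantity behind that analysis – each reader's change in accuracy with versus without AI assistance – can be sketched as follows. The data frame, column names, and toy values here are assumptions for illustration, not the study's dataset or statistical models.

```python
import pandas as pd

# Hypothetical reads: one row per (radiologist, case), with columns "radiologist",
# "ai_assisted" (bool), and "correct" (bool). Column names are assumed for the sketch.
reads = pd.DataFrame({
    "radiologist": ["R1", "R1", "R1", "R1", "R2", "R2", "R2", "R2"],
    "ai_assisted": [False, True, False, True, False, True, False, True],
    "correct":     [True, True, False, True, True, False, True, True],
})

# Accuracy per radiologist, with and without AI assistance
acc = (reads.groupby(["radiologist", "ai_assisted"])["correct"]
            .mean()
            .unstack("ai_assisted"))

# Positive delta = the reader did better with AI; negative = worse
acc["delta"] = acc[True] - acc[False]
print(acc)
```

As described below, it was the spread of these per-reader differences, not their average, that proved most striking.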

The effect of AI assistance was inconsistent and varied across radiologists, with some radiologists’ performance improving with AI and others’ worsening.

AI tools influenced human performance unpredictably

AI’s effects on human radiologists’ performance varied in often surprising ways.

For instance, contrary to what the researchers expected, factors such as how many years of experience a radiologist had, whether they specialised in thoracic (chest) radiology, and whether they’d used AI readers before did not reliably predict how an AI tool would affect a doctor’s performance.

Another finding that challenged the prevailing wisdom: Clinicians who had low performance at baseline did not benefit consistently from AI assistance. Some benefited more, some less, and some none at all. Overall, however, lower-performing radiologists at baseline had lower performance with or without AI. The same was true among radiologists who performed better at baseline. They performed consistently well, overall, with or without AI.

Then came a not-so-surprising finding: More accurate AI tools boosted radiologists’ performance, while poorly performing AI tools diminished the diagnostic accuracy of human clinicians.

While the analysis was not done in a way that allowed researchers to determine why this happened, the finding points to the importance of testing and validating AI tool performance before clinical deployment, the researchers said. Such pre-testing could ensure that inferior AI doesn’t interfere with human clinicians’ performance and, therefore, patient care.

What do these findings mean for the future of AI in the clinic?

The researchers cautioned that their findings do not provide an explanation for why and how AI tools seem to affect performance across human clinicians differently, but note that understanding why would be critical to ensuring that AI radiology tools augment human performance rather than hurt it.

To that end, the team noted, AI developers should work with physicians who use their tools to understand and define the precise factors that come into play in the human-AI interaction.

And, the researchers added, the radiologist-AI interaction should be tested in experimental settings that mimic real-world scenarios and reflect the actual patient population for which the tools are designed.

Apart from improving the accuracy of the AI tools, it’s also important to train radiologists to detect inaccurate AI predictions and to question an AI tool’s diagnostic call, the research team said. To achieve that, AI developers should ensure that they design AI models that can “explain” their decisions.

“Our research reveals the nuanced and complex nature of machine-human interaction,” said study co-senior author Nikhil Agarwal, professor of economics at MIT. “It highlights the need to understand the multitude of factors involved in this interplay and how they influence the ultimate diagnosis and care of patients.”

Source: Harvard Medical School

A Better View of Atherosclerotic Plaques with New Imaging Technique

Source: Wikimedia CC0

Researchers have developed a new catheter-based device that combines two powerful optical techniques to image atherosclerotic plaques that can build up inside the heart’s coronary arteries. By providing new details about plaque, the device could help clinicians and researchers improve treatments for preventing heart attacks and strokes.

“Atherosclerosis, leading to heart attacks and strokes, is the number one cause of death in Western societies – exceeding all combined cancer types – and, therefore, a major public health issue,” said research team leader Laura Marcu from the University of California, Davis. “Better clinical management made possible by advanced intravascular imaging tools will benefit patients by providing more accurate information to help cardiologists tailor treatment or by supporting the development of new therapies.”

In the Optica Publishing Group journal Biomedical Optics Express, researchers describe their new flexible device, which combines fluorescence lifetime imaging (FLIM) and polarisation-sensitive optical coherence tomography (PSOCT) to capture rich information about the composition, morphology and microstructure of atherosclerotic plaques. The work was a collaborative project with Brett Bouma and Martin Villiger, experts in OCT from the Wellman Center for Photomedicine at Massachusetts General Hospital.

“With further testing and development, our device could be used for longitudinal studies where intravascular imaging is obtained from the same patients at different timepoints, providing a picture of plaque evolution or response to therapeutic interventions,” said Julien Bec, first author of the paper. “This will be very valuable to better understand disease evolution, evaluate the efficacy of new drugs and treatments and guide stenting procedures used to restore normal blood flow.”

Gaining an unprecedented view

Most of what scientists know about how atherosclerosis forms and develops over time comes from histopathology studies of postmortem coronary specimens. Although the development of imaging systems such as intravascular ultrasound and intravascular OCT has made it possible to study plaques in living patients, there is still a need for improved methods and tools to investigate and characterise atherosclerosis.

To address this need, the researchers embarked on a multi-year research project to develop and validate multispectral FLIM as an intravascular imaging modality. FLIM can provide insights into features such as the composition of the extracellular matrix, the presence of inflammation and the degree of calcification inside an artery. In earlier work, they combined FLIM with intravascular ultrasound, and in this new work they combined it with PSOCT. PSOCT provides high-resolution morphological information along with birefringence and depolarisation measurements. When used together, FLIM and PSOCT provide an unprecedented amount of information on plaque morphology, microstructure and biochemical composition.

“Birefringence provides information about the plaque collagen, a key structural protein that helps with lesion stabilization, and depolarisation is related to lipid content that contributes to plaque destabilization,” said Bec. “Holistically, this hybrid approach can provide the most detailed picture of plaque characteristics of all intravascular imaging modalities reported to date.”

Getting two imaging modalities into one device

The development of multimodal intravascular imaging systems compatible with coronary catheterisation is technologically challenging. It requires flexible catheters less than 1 mm in diameter that can operate in vessels with sharp twists and turns. A high imaging speed of around 100 frames per second is also necessary to limit cardiac motion artefacts and ensure proper imaging inside an artery.

To integrate FLIM and PSOCT into a single device without compromising the performance of either imaging modality, the researchers used optical components previously developed by Marcu’s lab and other research groups. Key to achieving high PSOCT performance was a newly designed rotary collimator with high light throughput and a high return loss – a measure of how little power is reflected back toward the light source relative to the power incident on the device. The catheter system they developed has dimensions and flexibility similar to those of the intravascular imaging devices currently in clinical use.
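For readers unfamiliar with the term, return loss is usually quoted in decibels, with larger values meaning less light reflected back toward the source. A minimal sketch of that arithmetic, with purely illustrative numbers rather than measurements from the paper:

```python
import math

def return_loss_db(p_incident_mw: float, p_reflected_mw: float) -> float:
    """Return loss in dB; larger values mean less back-reflection."""
    return 10 * math.log10(p_incident_mw / p_reflected_mw)

# Illustrative numbers only:
print(return_loss_db(1.0, 1e-6))  # 60 dB: only a millionth of the power is reflected
```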

After testing the new system with artificial tissue to demonstrate basic functionality on well-characterised samples, the researchers also showed that it could be used to measure properties of a healthy coronary artery removed from a pig. Finally, in vivo testing in swine hearts demonstrated that the hybrid catheter system’s performance was sufficient to support work toward clinical validation. These tests all showed that the FLIM-PSOCT catheter system could simultaneously acquire co-registered FLIM data over four distinct spectral bands and PSOCT backscattered intensity, birefringence and depolarisation information.

Next, the researchers plan to use the intravascular imaging system to image plaques in ex vivo human coronary arteries. By comparing the optical signals acquired using the system with plaque characteristics identified by expert pathologists, they can better understand which features can be identified by FLIM-PSOCT and use this to develop prediction models. They also plan to move forward with testing in support of clinical validation of the system in patients.

Source: Optica

New, More Accurate Approach to Blood Tests for Determining Diabetes Risks

Photo by National Cancer Institute on Unsplash

A new approach to blood tests could potentially be used to estimate a patient’s risk of type 2 diabetes, according to a new study appearing in BMC’s Journal of Translational Medicine. Currently, the most commonly used inflammatory biomarker for predicting the risk of type 2 diabetes is high-sensitivity C-reactive protein (CRP). But new research has suggested that assessing biomarkers jointly, rather than individually, would improve the chances of predicting diabetes risk and diabetic complications.

A study by Edith Cowan University (ECU) researcher Dan Wu investigated the connection between systemic inflammation, assessed jointly by cumulative high-sensitivity CRP and another biomarker, the monocyte to high-density lipoprotein ratio (MHR), and incident type 2 diabetes.

The study followed more than 40 800 non-diabetic participants over a near ten-year period, with more than 4800 of the participants developing diabetes over this period.

Wu said that among participants who developed type 2 diabetes, a significant interaction between MHR and CRP was observed.

“Specifically, increases in the MHR in each CRP stratum increased the risk of type 2 diabetes; concomitant increases in MHR and CRP presented significantly higher incidence rates and risks of diabetes.

“Furthermore, the association between chronic inflammation (reflected by the joint cumulative MHR and CRP exposure) and incident diabetes was highly age- and sex-specific and influenced by hypertension, high cholesterol, or prediabetes. The addition of the MHR and CRP to the clinical risk model significantly improved the prediction of incident diabetes,” said Wu.
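The modelling step described in that last sentence – adding biomarkers to a baseline clinical risk model and checking whether prediction improves – can be sketched roughly as follows. Everything here is synthetic and simplified: the variable names, effect sizes, and use of AUC are assumptions for illustration, not the study's cohort, covariates, or statistics.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Synthetic cohort: age, sex, plus cumulative CRP and MHR exposure (arbitrary units)
age = rng.normal(50, 10, n)
sex = rng.integers(0, 2, n)
crp = rng.gamma(2.0, 1.5, n)
mhr = rng.gamma(2.0, 0.3, n)

# Synthetic outcome in which inflammation contributes to diabetes risk
logit = -6 + 0.05 * age + 0.2 * sex + 0.3 * crp + 1.0 * mhr
diabetes = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_base = np.column_stack([age, sex])
X_full = np.column_stack([age, sex, crp, mhr])

Xb_tr, Xb_te, Xf_tr, Xf_te, y_tr, y_te = train_test_split(
    X_base, X_full, diabetes, test_size=0.3, random_state=0)

# Compare discrimination of the baseline model against the biomarker-augmented model
auc_base = roc_auc_score(y_te, LogisticRegression(max_iter=1000).fit(Xb_tr, y_tr).predict_proba(Xb_te)[:, 1])
auc_full = roc_auc_score(y_te, LogisticRegression(max_iter=1000).fit(Xf_tr, y_tr).predict_proba(Xf_te)[:, 1])
print(f"AUC clinical model: {auc_base:.3f}  AUC + CRP & MHR: {auc_full:.3f}")
```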

Biological sex a risk factor

The study found that females had a greater risk of type 2 diabetes conferred by joint increases in CRP and MHR, with Wu stating that sex hormones could account for these differences.

Wu said that the research findings corroborated the involvement of chronic inflammation in causing early-onset diabetes and merited specific attention.

“Epidemiological evidence indicates a consistent increase in early-onset diabetes, especially in developing countries. Leveraging this age-specific association between chronic inflammation and type 2 diabetes may be a promising method for achieving early identification of at-risk young adults and developing personalised interventions,” she added.

Wu noted that the chronic progressive nature of diabetes and the enormous burden of subsequent comorbidities further highlighted the urgent need to address this critical health issue.

Although aging and genetics are non-modifiable risk factors, other risk factors could be modified through lifestyle changes.

Inflammation is strongly influenced by life activities and metabolic conditions such as diet, sleep disruptions, chronic stress, and glucose and cholesterol dysregulation, thereby indicating the potential benefits of monitoring risk-related metabolic conditions.

Wu said that the dual advantages of cost-effectiveness and wide availability of cumulative MHR and CRP measurements in current clinical settings support their widespread use as a convenient tool for predicting the risk of diabetes.

Source: Edith Cowan University

Terahertz Biosensor can Accurately Detect Skin Cancer

3D structure of a melanoma cell derived by ion abrasion scanning electron microscopy. Credit: Sriram Subramaniam/ National Cancer Institute

Researchers have developed a revolutionary biosensor using terahertz (THz) waves that can detect skin cancer with exceptional sensitivity, potentially paving the way for earlier and easier diagnoses. Published in the journal IEEE Transactions on Biomedical Engineering, the study presents a significant advancement in early cancer detection, thanks to a multidisciplinary collaboration of teams from Queen Mary University of London and the University of Glasgow.

“Traditional methods for detecting skin cancer often involve expensive, time-consuming CT and PET scans, or invasive technologies that operate at higher frequencies,” explains Dr Shohreh Nourinovin, Postdoctoral Research Associate at Queen Mary’s School of Electronic Engineering and Computer Science, and the study’s first author.

“Our biosensor offers a non-invasive and highly efficient solution, leveraging the unique properties of THz waves – a type of radiation with lower energy than X-rays, thus safe for humans – to detect subtle changes in cell characteristics.”

The key innovation lies in the biosensor’s design. Featuring tiny, asymmetric resonators on a flexible substrate, it can detect subtle changes in the properties of cells.

Unlike traditional methods that rely solely on refractive index, this device analyses a combination of parameters, including resonance frequency, transmission magnitude, and a value called “Full Width at Half Maximum” (FWHM). This comprehensive approach provides a richer picture of the tissue, allowing for more accurate differentiation between healthy and cancerous cells and measurement of the tissue’s degree of malignancy.
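How resonance frequency, transmission magnitude, and FWHM can be read off a transmission spectrum is sketched below on a synthetic Lorentzian resonance dip. The values are illustrative only; the paper's metasurface responses are not reproduced here.

```python
import numpy as np

# Synthetic THz transmission spectrum with a Lorentzian resonance dip
freq = np.linspace(0.5, 2.5, 2001)          # frequency axis, THz
f0, gamma = 1.4, 0.05                        # resonance centre and half-width (assumed, THz)
transmission = 1 - 0.6 / (1 + ((freq - f0) / gamma) ** 2)

# Resonance frequency and transmission magnitude at the dip
i_min = np.argmin(transmission)
f_res = freq[i_min]
t_min = transmission[i_min]

# Full Width at Half Maximum: width of the dip where its depth is half the maximum depth
half_level = 1 - (1 - t_min) / 2
inside = freq[transmission <= half_level]
fwhm = inside.max() - inside.min()

print(f"resonance: {f_res:.3f} THz, dip transmission: {t_min:.2f}, FWHM: {fwhm*1000:.1f} GHz")
```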

In tests, the biosensor successfully differentiated between normal skin cells and basal cell carcinoma (BCC) cells, even at different concentrations. This ability to detect early-stage cancer holds immense potential for improving patient outcomes.

“The implications of this study extend far beyond skin cancer detection,” says Dr Nourinovin.

“This technology could be used for early detection of various cancers and other diseases, like Alzheimer’s, with potential applications in resource-limited settings due to its portability and affordability.”

Dr Nourinovin’s research journey wasn’t without its challenges.

Initially focusing on THz spectroscopy for cancer analysis, her project was temporarily halted due to the COVID pandemic. However, this setback led her to explore the potential of THz metasurfaces, a novel approach that sparked a new chapter in her research.

Source: Queen Mary University of London

Experimental Model Identifies New Drug–drug Interactions

Photo by Myriam Zilles on Unsplash

When a patient takes an oral drug, transporter proteins found on the cells that line the gastrointestinal tract facilitate its entry into the bloodstream. But for many drugs, it is not known which of those transporters they use to exit the digestive tract.

Identifying the transporters used by specific drugs could help to improve patient treatment because if two drugs rely on the same transporter, they can interfere with each other and should not be prescribed together.

Researchers at MIT, Brigham and Women’s Hospital, and Duke University have developed a multipronged strategy to identify the transporters used by different drugs, which appears in Nature Biomedical Engineering. Their approach, which makes use of both tissue models and machine-learning algorithms, has already revealed that a commonly prescribed antibiotic and a blood thinner can interfere with each other.

“One of the challenges in modelling absorption is that drugs are subject to different transporters. This study is all about how we can model those interactions, which could help us make drugs safer and more efficacious, and predict potential toxicities that may have been difficult to predict until now,” says Giovanni Traverso, an associate professor of mechanical engineering at MIT, a gastroenterologist at Brigham and Women’s Hospital, and the senior author of the study.

Learning more about which transporters help drugs pass through the digestive tract could also help drug developers improve the absorbability of new drugs by adding excipients that enhance their interactions with transporters.

Former MIT postdocs Yunhua Shi and Daniel Reker are the lead authors of the study.

Drug transport

Previous studies have identified several transporters in the GI tract that help drugs pass through the intestinal lining. Three of the most commonly used, which were the focus of the new study, are BCRP, MRP2, and PgP.

For this study, Traverso and his colleagues adapted a tissue model they had developed in 2020 to measure a given drug’s absorbability. This experimental setup, based on pig intestinal tissue grown in the laboratory, can be used to systematically expose tissue to different drug formulations and measure how well they are absorbed.

To study the role of individual transporters within the tissue, the researchers used short strands of RNA called siRNA to knock down the expression of each transporter. In each section of tissue, they knocked down different combinations of transporters, which enabled them to study how each transporter interacts with many different drugs.

“There are a few roads that drugs can take through tissue, but you don’t know which road. We can close the roads separately to figure out, if we close this road, does the drug still go through? If the answer is yes, then it’s not using that road,” Traverso says.

The researchers tested 23 commonly used drugs using this system, allowing them to identify transporters used by each of those drugs. Then, they trained a machine-learning model on that data, as well as data from several drug databases. The model learned to make predictions of which drugs would interact with which transporters, based on similarities between the chemical structures of the drugs.
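A minimal sketch of the structural-similarity idea, using RDKit Morgan fingerprints and a nearest-neighbour vote, is shown below. The SMILES strings, transporter labels, and the simple classifier are illustrative assumptions; they are not the study's data, features, or algorithm.

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def fingerprint(smiles: str):
    """Morgan (circular) fingerprint used as a crude structural descriptor."""
    return AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles), 2, nBits=2048)

# Tiny illustrative training set: drug SMILES -> does it use transporter "PgP"?
# (Labels are made up for the sketch, not experimental results.)
training = {
    "CCO": False,                     # ethanol (placeholder)
    "CC(=O)Oc1ccccc1C(=O)O": False,   # aspirin (placeholder)
    "CN1CCC[C@H]1c1cccnc1": True,     # nicotine (placeholder)
}
train_fps = {smi: fingerprint(smi) for smi in training}

def predict_pgp(query_smiles: str) -> bool:
    """Label the query with the PgP status of its most structurally similar training drug."""
    qfp = fingerprint(query_smiles)
    best = max(train_fps, key=lambda smi: DataStructs.TanimotoSimilarity(qfp, train_fps[smi]))
    return training[best]

print(predict_pgp("CCN"))  # nearest neighbour of ethylamine within this toy set
```

The study's actual model was trained on the tissue-model measurements together with curated drug databases, as described above.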

Using this model, the researchers analysed a new set of 28 currently used drugs, as well as 1595 experimental drugs. This screen yielded nearly 2 million predictions of potential drug interactions. Among them was the prediction that doxycycline, an antibiotic, could interact with warfarin, a commonly prescribed blood-thinner. Doxycycline was also predicted to interact with digoxin, which is used to treat heart failure, levetiracetam, an antiseizure medication, and tacrolimus, an immunosuppressant.

Identifying interactions

To test those predictions, the researchers looked at data from about 50 patients who had been taking one of those drugs when they were prescribed doxycycline. This data, which came from a patient database at Massachusetts General Hospital and Brigham and Women’s Hospital, showed that when doxycycline was given to patients already taking warfarin, the level of warfarin in the patients’ bloodstream went up, then went back down again after they stopped taking doxycycline.

That data also confirmed the model’s predictions that the absorption of doxycycline is affected by digoxin, levetiracetam, and tacrolimus. Only one of those drugs, tacrolimus, had been previously suspected to interact with doxycycline.

“These are drugs that are commonly used, and we are the first to predict this interaction using this accelerated in silico and in vitro model,” Traverso says. “This kind of approach gives you the ability to understand the potential safety implications of giving these drugs together.”

Source: Massachusetts Institute of Technology

“Movies” with Colour and Music Visualise Brain Activity Data in Beautiful Detail

Novel toolkit translates neuroimaging data into audiovisual formats to aid interpretation

Simple audiovisualisation of wide field neural activity. Adapted from Thibodeaux et al., 2024, PLOS ONE, CC-BY 4.0

Complex neuroimaging data can be explored through translation into an audiovisual format – a video with accompanying musical soundtrack – to help interpret what happens in the brain when performing certain behaviours. David Thibodeaux and colleagues at Columbia University, US, present this technique in the open-access journal PLOS ONE on February 21, 2024. Examples of these beautiful “brain movies” are included below.

Recent technological advances have made it possible for multiple components of activity in the awake brain to be recorded in real time. Scientists can now observe, for instance, what happens in a mouse’s brain when it performs specific behaviours or receives a certain stimulus. However, such research produces large quantities of data that can be difficult to intuitively explore to gain insights into the biological mechanisms behind brain activity patterns.

Prior research has shown that some brain imaging data can be translated into audible representations. Building on such approaches, Thibodeaux and colleagues developed a flexible toolkit that enables translation of different types of brain imaging data – and accompanying video recordings of lab animal behaviour – into audiovisual representations.

The researchers then demonstrated the new technique in three different experimental settings, showing how audiovisual representations can be prepared with data from various brain imaging approaches, including 2D wide-field optical mapping (WFOM) and 3D swept confocally aligned planar excitation (SCAPE) microscopy.

The toolkit was applied to previously collected WFOM data that detected both neural activity and brain blood flow changes in mice engaging in different behaviours, such as running or grooming. Neuronal data were represented by piano notes struck in time with spikes in brain activity, with the volume of each note indicating the magnitude of activity and its pitch indicating the location in the brain where the activity occurred. Meanwhile, blood flow data were represented by violin sounds. The piano and violin sounds, played in real time, demonstrate the coupled relationship between neuronal activity and blood flow. When the audiovisualisation is viewed alongside a video of the mouse, a viewer can discern which patterns of brain activity corresponded to different behaviours.
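The mapping described – volume tracking the magnitude of activity and pitch tracking its location – can be sketched as a simple list of note events. This is a toy re-implementation under assumed conventions (MIDI pitch numbers, a fixed spike threshold), not the authors' toolkit.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy neural traces: rows = brain regions (ordered, e.g. posterior to anterior), columns = time bins
activity = rng.random((4, 100))
pitches = [60, 64, 67, 72]        # one MIDI pitch per region (C4, E4, G4, C5) - assumed mapping
threshold = 0.9                   # "spike" threshold on the normalised trace - assumed

note_events = []                  # (time_bin, midi_pitch, velocity)
for region, trace in enumerate(activity):
    for t, value in enumerate(trace):
        if value > threshold:
            velocity = int(round(127 * value))   # louder note for stronger activity
            note_events.append((t, pitches[region], velocity))

print(sorted(note_events)[:10])
```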

The authors note that their toolkit is not a substitute for quantitative analysis of neuroimaging data. Nonetheless, it could help scientists screen large datasets for patterns that might otherwise have gone unnoticed and are worth further analysis.

The authors add: “Listening to and seeing representations of [brain activity] data is an immersive experience that can tap into this capacity of ours to recognise and interpret patterns (consider the online security feature that asks you to “select traffic lights in this image” – a challenge beyond most computers, but trivial for our brains)…[It] is almost impossible to watch and focus on both the time-varying [brain activity] data and the behavior video at the same time, our eyes will need to flick back and forth to see things that happen together. You generally need to continually replay clips over and over to be able to figure out what happened at a particular moment. Having an auditory representation of the data makes it much simpler to see (and hear) when things happen at the exact same time.”

  1. Audiovisualisation of neural activity from the dorsal surface of the thinned skull cortex of the awake mouse.
  2. Audiovisualisation of neural activity from the dorsal surface of the thinned skull cortex of the ketamine/xylazine anaesthetised mouse.
  3. Audiovisualisation of SCAPE microscopy data capturing calcium activity in apical dendrites in the awake mouse brain.
  4. Audiovisualisation of neural activity and blood flow from the dorsal surface of the thinned skull cortex of the awake mouse.

Video Credits: Thibodeaux et al., 2024, PLOS ONE, CC-BY 4.0

Visualising Multiple Sclerosis with a New MRI Procedure

This is a pseudo-colored image of a high-resolution gradient-echo MRI scan of a fixed cerebral hemisphere from a person with multiple sclerosis. Credit: Govind Bhagavatheeshwaran, Daniel Reich, National Institute of Neurological Disorders and Stroke, National Institutes of Health

A key feature of multiple sclerosis (MS) is that it causes the patient’s own immune system to attack and destroy the myelin sheaths in the central nervous system. To date, it hasn’t been possible to visualise the myelin sheaths well enough to use this information for the diagnosis and monitoring of MS.  Now researchers have developed a new magnetic resonance imaging (MRI) procedure that maps the condition of the myelin sheaths more accurately than was previously possible.

The researchers successfully tested the procedure on healthy people for the first time, and published their results in Magnetic Resonance in Medicine.

In the future, the MRI system with its special head scanner could help doctors to recognise MS at an early stage and better monitor the progression of the disease.

This technology, developed by the researchers at ETH Zurich and University of Zurich, led by Markus Weiger and Emily Baadsvik from the Institute for Biomedical Engineering, could also facilitate the development of new drugs for MS. But it doesn’t end there: the new MRI method could also be used by researchers to better visualise other solid tissue types such as connective tissue, tendons and ligaments.

Quantitative myelin maps

Conventional MRI devices capture only inaccurate, indirect images of the myelin sheaths because these devices typically work by reacting to water molecules in the body that have been stimulated by radio waves in a strong magnetic field. But the myelin sheaths, which wrap around the nerve fibres in several layers, consist mainly of fatty tissue and proteins. That said, there is some water – known as myelin water – trapped between these layers. Standard MRIs build their images primarily from the signals of the hydrogen atoms in this myelin water, rather than imaging the myelin sheaths directly.

The ETH researchers’ new MRI method solves this problem and measures the myelin content directly. It puts numerical values on MRI images of the brain to show how much myelin is present in a particular area compared to other areas of the image. A value of 8, for instance, means that the myelin content at that point is only 8 percent of a maximum value of 100, indicating a significant thinning of the myelin sheaths. Essentially, the darker the area and the smaller the number in the image, the more the myelin sheaths have been reduced. This information ought to enable doctors to better assess the severity and progression of MS.
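The way such numbers are read can be illustrated by scaling a measured myelin signal map to percent of its maximum value. The array values and the simple normalisation below are illustrative assumptions, not the paper's quantification pipeline.

```python
import numpy as np

# Toy "myelin signal" map for a small image patch (arbitrary units)
myelin_signal = np.array([
    [0.80, 0.95, 0.90],
    [0.76, 0.08, 0.55],
    [0.88, 0.91, 0.85],
])

# Express each pixel as a percentage of the maximum value in the map
myelin_percent = np.round(100 * myelin_signal / myelin_signal.max()).astype(int)
print(myelin_percent)
# The pixel that comes out around 8 would indicate markedly thinned myelin sheaths.
```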

Measuring signals within millionths of a second

It is difficult, however, to image the myelin sheaths directly, since the signals that the MRI triggers in the myelin tissue itself are very short-lived; the signals that emanate from the myelin water last much longer.

“Put simply, the hydrogen atoms in myelin tissue move less freely than those in myelin water. That means they generate much briefer signals, which disappear again after a few microseconds,” Weiger says, adding: “And bearing in mind a microsecond is a millionth of a second, that’s a very short time indeed.” A conventional MRI scanner can’t capture these fleeting signals because it doesn’t take the measurements fast enough.
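A rough sketch of why timing matters: treating the signals as exponential decays with order-of-magnitude relaxation times (the microsecond and millisecond values below are assumptions for illustration, not measurements from the study) shows how little of the myelin-tissue signal survives by the time a conventional sequence starts sampling.

```python
import numpy as np

def remaining_fraction(t_seconds: float, t2_seconds: float) -> float:
    """Fraction of signal left after time t for an exponential decay with time constant T2."""
    return float(np.exp(-t_seconds / t2_seconds))

T2_MYELIN = 10e-6   # ~10 microseconds for hydrogen in the myelin tissue itself (assumed)
T2_WATER = 30e-3    # ~tens of milliseconds for the myelin water (assumed)
T_SAMPLE = 1e-3     # earliest sampling time of a conventional sequence (assumed)

print(f"myelin tissue signal left after 1 ms: {remaining_fraction(T_SAMPLE, T2_MYELIN):.1e}")
print(f"myelin water signal left after 1 ms:  {remaining_fraction(T_SAMPLE, T2_WATER):.2f}")
```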

To solve this problem, the researchers used a specially customised MRI head scanner that they have developed over the past ten years together with the companies Philips and Futura. This scanner is characterised by a particularly strong gradient in the magnetic field. “The greater the change in magnetic field strength generated by the three scanner coils, the faster information about the position of hydrogen atoms can be recorded,” Baadsvik says.

Generating such a strong gradient calls for a strong current and a sophisticated design. As the researchers scan only the head, the magnetic field is more contained and concentrated than with conventional devices. In addition, the system can quickly switch from transmitting radio waves to receiving signals; the researchers and their industry partners have developed a special circuit for this purpose.

The researchers have already successfully tested their MRI procedure on tissue samples from MS patients and on two healthy individuals. Next, they want to test it on MS patients themselves. Whether the new MRI head scanner will make its way into hospitals in the future now depends on the medical industry. “We’ve shown that our process works,” Weiger says. “Now it’s up to industry partners to implement it and bring it to market.”

Source: ETH Zurich

New Ultrasound Method can Predict Risk of Preterm Delivery

Photo by Mart Production on Pexels

Researchers have developed a way to use ultrasound to estimate the risk of delivering a baby preterm. The new method measures microstructural changes in a woman’s cervix using quantitative ultrasound. The method works as early as 23 weeks into a pregnancy, according to the research, which is published in the American Journal of Obstetrics & Gynecology Maternal Fetal Medicine.

The current method for assessing a woman’s risk of preterm birth is based solely on whether she has previously given birth prematurely. This means there has been no way to assess risk in a first-time pregnancy.

“Today, clinicians wait for signs and symptoms of a preterm birth,” such as a ruptured membrane, explained lead author Barbara McFarlin, a professor emeritus of nursing at University of Illinois Chicago.

“Our technique would be helpful in making decisions based on the tissue and not just on symptoms.”

The new method is the result of more than 20 years of collaboration between researchers in nursing and engineering at UIC and University of Illinois Urbana-Champaign. In a study of 429 women who gave birth without induction at the University of Illinois Hospital, the new method was effective at predicting the risk of preterm births during first-time pregnancies.

And for women who were having a subsequent pregnancy, combining the data from quantitative ultrasound with the woman’s delivery history was more effective at assessing risk than using her history alone.

The new approach differs from a traditional ultrasound, where a picture is produced from the data received. In quantitative ultrasound, a traditional ultrasound scan is performed, but the radiofrequency data itself is read and analysed to determine tissue characteristics.
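The distinction can be sketched in code: conventional imaging keeps only the envelope (brightness) of the radiofrequency signal, whereas quantitative ultrasound analyses the spectrum of the RF data itself. The sampling rate, window, frequency band, and the spectral-slope and midband-fit features below are generic illustrative choices, not the parameters used in the cervix study.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(2)
fs = 40e6                               # RF sampling rate in Hz (assumed)
rf_line = rng.normal(size=4096)         # synthetic backscattered RF line from one beam

# Conventional B-mode imaging keeps only the envelope (brightness) of the RF signal
envelope = np.abs(hilbert(rf_line))

# Quantitative ultrasound instead analyses the RF spectrum within a gated tissue window
window = rf_line[1000:1512] * np.hanning(512)
spectrum_db = 10 * np.log10(np.abs(np.fft.rfft(window)) ** 2 + 1e-12)
freqs = np.fft.rfftfreq(512, d=1 / fs)

# Example spectral features: slope and midband value over a usable band (2-10 MHz here)
band = (freqs > 2e6) & (freqs < 10e6)
slope, intercept = np.polyfit(freqs[band], spectrum_db[band], deg=1)
midband_fit = slope * 6e6 + intercept
print(f"spectral slope: {slope*1e6:.2f} dB/MHz, midband fit: {midband_fit:.1f} dB")
```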

The study is the culmination of a research partnership that began in 2001 when McFarlin was a nursing PhD student at UIC. Having previously worked as a nurse midwife and sonographer, she had noticed that there were differences in the appearance of the cervix in women who went on to deliver preterm. She was interested in quantifying this and discovered that “no one was looking at it.”

She was put in touch with Bill O’Brien, a UIUC professor of electrical and computer engineering, who was studying ways to use quantitative ultrasound data in health research. Together, over the past 22 years, they established that quantitative ultrasound could detect changes in the cervix and, as McFarlin had suspected long ago, that those changes help predict the risk of preterm delivery.

The preterm birth rate hovers around 10-15% of pregnancies, O’Brien said.

“That’s a very, very high percentage to not know what is happening,” he said.

If a clinician could know at 23 weeks that there was a risk of preterm birth, they would likely conduct extra appointments to keep an eye on the foetus, the researchers said.

But since there had previously been no routine way to assess preterm birth risk this early, there have been no studies to show what sort of interventions would be helpful in delaying labour.

This study, O’Brien explains, will allow other researchers to “start studying processes by which you might be able to prevent or delay preterm birth.”

Source: University of Illinois Chicago

Foundations Laid for Standardised PET Examination of Diffuse Gliomas

Photo by Mart Production on Pexels

Diffuse gliomas are malignant brain tumours that cannot be optimally examined by means of conventional MRI. So-called amino acid PET (positron emission tomography) scans are better able to image the activity and spread of gliomas. An international team of researchers from the RANO Working Group has drawn up the first ever international criteria for the standardised imaging of gliomas using amino acid PET and has published its results in the journal The Lancet Oncology.

PET uses a radioactive tracer to measure metabolic processes in the body. Amino acid PET is used in the diagnosis of diffuse gliomas, with tracers that work on a protein basis (amino acids) and accumulate in brain tumours.

The Response Assessment in Neuro-Oncology (RANO) Working Group is an international, multidisciplinary consortium founded to develop standardised new response criteria for clinical studies relating to brain tumours.

Under the joint leadership of nuclear medicine physician Nathalie Albert from LMU and oncologist Professor Matthias Preusser from the Medical University of Vienna, the RANO group has developed new criteria for assessing the success of therapies for diffuse gliomas.

Nathalie Albert explains: “PET imaging with radioactively labelled amino acids has proven extremely valuable in neuro-oncology and permits reliable representation of the activity and extension of gliomas. Although amino acid PET has been used for years, it had not been evaluated in a structured manner before now. In contrast to MRI-based diagnostics, there have been no criteria for interpreting these PET images.” According to the researchers, the new criteria allow PET to be used in clinical studies and everyday clinical practice and create a foundation for future research and the comparison of treatments for improved therapies.

New criteria for PET examinations of brain tumours

Diffuse gliomas develop out of glial cells and are generally aggressive and difficult to treat.

The RANO group has developed criteria that permit evaluation of the success of treatment using PET. Called PET RANO 1.0, these PET-based criteria open up new possibilities for the standardised assessment of diffuse gliomas.

Source: Ludwig-Maximilians-Universität München