Category: Lab Tests and Imaging

Radiology Helps Treat Chronic Pain

Dr Winter performing a CT-guided interventional procedure. Photo: Supplied

Radiology encompasses more than just imaging. It is a medical field that uses various imaging techniques to diagnose conditions, guide minimally invasive procedures and, much to the relief of agonised patients, treat chronic pain.

‘Traditionally, radiology is known as a modality where causes of pain are only diagnosed’, says Dr Arthur Winter, a radiologist at SCP Radiology. ‘Interventional radiology has changed this. It is a rapidly developing branch of radiology involving minimally invasive procedures. Pain management procedures are becoming a daily part of busy radiology departments.’

Simply put, interventional radiologists can use precisely targeted injections to intervene in the body’s perception of pain.

Understanding pain

Pain is a signal from the nervous system to let you know that something is wrong in your body. It is transmitted in a complex interaction between specialised nerves, the spinal cord and the brain. It can take many forms, be localised to one part of the body or appear to come from all over.

Pain can be acute or chronic

Harvard Medical School gives an overview of the difference between the two. ‘Most acute pain comes from damage to body tissues. It results from physical trauma such as a sports or exercise injury, a broken bone, a medical procedure or an accident like stubbing your toe, cutting a finger or bumping into something. The pain can feel sharp, aching or throbbing and often heals within a few days to a few weeks.’

In comparison, chronic pain lasts at least two to three months, often long after you have recovered from the injury or illness and may even become permanent. It could also be a result of lifestyle diseases. Symptoms and severity vary and may include a dull ache, shooting, burning, stabbing or electric shock-like pain and sensations like tingling and numbness. Chronic pain can be debilitating and affect your ability to perform activities of daily living.

Interventional pain management

Although some acute pain can be managed with interventions, it is patients with chronic pain that truly benefit. ‘These patients often use high doses of opioid painkillers that may cause nausea, constipation, anorexia and addiction. Other painkillers may also irritate the stomach lining and cause kidney problems,’ says Dr Winter.

One alternative offered by interventional pain management involves injections called nerve blocks, which target very specific nerves.

‘Most of these interventions prevent nerve impulses or pain signals from being transmitted, using long-acting local anaesthetics. The effect is usually temporary but the addition of cortisone – or steroids – often brings longer-lasting relief. In some cases, it could be appropriate to follow the temporary block with neurolysis, which is a permanent disruption or destruction of the target nerves.’

Although nerve blocks and other long-acting pain injections have been done for years, the scope of procedures is evolving fast. The involvement of radiologists has also grown.

Dr Winter explains. ‘Pain management has traditionally been the responsibility of clinicians and anaesthetists. During nerve block procedures, they were typically guided by their knowledge of anatomy or a continuous X-ray technique called fluoroscopy. As ultrasound became more widely available, many anaesthetists learned to do these procedures under ultrasound guidance.

‘These specialists still provide these treatments but, thanks to the availability of specialised imaging equipment, radiologists now have the tools and skill to do procedures under sophisticated image guidance. With CT guidance, some procedures can be performed with great accuracy while avoiding blood vessels and non-target organs,’ says Dr Winter.

‘A lower dose of medication is also needed if the needle is placed accurately next to the target nerves. It is therefore not surprising that this is increasingly becoming a responsibility of interventional radiologists.’

Other procedures where radiologists are involved include targeted Botox injections to treat the symptoms of piriformis syndrome, epidural cortisone injections for inflammation in the spine and a procedure called an epidural blood patch, which seals spinal fluid leaks that cause low-pressure headaches.

In conclusion, Dr Winter says chronic pain may cause poor quality of life and depression, often seen in patients with underlying cancer. ‘It is especially these patients who should be considered for interventions. There are, for example, very effective procedures to manage pain caused by pancreatic and pelvic cancers.

‘Specialists like oncologists and neurologists recognise the value of interventional radiology in pain management and work closely with us to support their patients. It is a growing branch of radiology that offers a minimally invasive solution and it’s quite rewarding to see patients regain some quality of life.’

Routine Bloods can Improve Cancer Screening in Patients with Abdominal Symptoms

Risk of cancer by specific site based on blood test abnormalities in symptomatic patients can help guide referral strategies

Photo by National Cancer Institute on Unsplash

Incorporating information from common blood tests can enhance cancer risk assessment in patients with abdominal symptoms, according to a study publishing July 30th in the open-access journal PLOS Medicine by Meena Rafiq from University College London, UK, and colleagues.

Early cancer detection is key to successful treatment. However, many undiagnosed cancer patients present to their primary care provider with non-specific symptoms that can be a result of several other benign causes, making it difficult to determine who warrants additional diagnostic testing or referral. Most guidelines focus on “alarm” symptoms specific to a given type of cancer to guide referrals. There is limited guidance on non-specific symptoms to guide cancer assessment and referral decisions across different cancer types.

In this study, researchers used data from the UK Clinical Practice Research Datalink to identify more than 470 000 patients aged 30 years or older who had visited a general practitioner due to abdominal pain or bloating. Within a year of that visit, approximately 9000 patients with abdominal pain and 1000 patients with bloating were diagnosed with cancer. The researchers looked at 19 abnormal blood test results collected during the initial primary care visit to see if they could predict who was more likely to be diagnosed with cancer.
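As a rough back-of-the-envelope illustration (using only the approximate counts quoted above, not figures from the paper), the baseline risk in this cohort works out to roughly 2%:

```python
# Rough one-year cancer risk implied by the counts quoted above.
# The study itself stratifies risk by age, sex and blood test
# abnormality; this is only the crude overall figure.
cohort = 470_000          # patients with abdominal pain or bloating
cancers = 9_000 + 1_000   # diagnosed with cancer within a year

print(f"Implied one-year cancer risk: {cancers / cohort:.1%}")  # ~2.1%
```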

Several blood abnormalities were predictive of cancer risk across sex and age groups. For example, in patients aged 30–59 years with abdominal symptoms, anaemia, low albumin, raised platelets, abnormal ferritin, and increased inflammatory markers strongly predicted a risk of undiagnosed cancer. Among older patients (aged 60 years and above), the presence of abdominal pain or bloating alone was enough to warrant a cancer referral.

The study also showed which types of cancer were most common based on age, sex, and blood test abnormality. For example, among women aged 50–59 years with anaemia and abdominal bloating, the most common types of cancer were bowel and ovarian cancer. This level of granularity can help guide providers on which diagnostic strategies to prioritise.

The study shows that common, routine blood test results can provide additional context in patients with non-specific abdominal symptoms to improve cancer risk assessment and identify patients who warrant additional testing and/or referral to a specialist.

The authors add, “Using existing blood tests can be an effective and affordable way to improve early diagnosis of cancer in people who see their GP with vague symptoms. Our study identified several commonly used GP blood tests where abnormal results increase a patient’s risk of having cancer and these can be used to diagnose cancer earlier.”

Provided by PLOS

“We Were the First Ones to Do It”: Innovative SA Study Takes TB Testing to People’s Homes

Tuberculosis bacteria. Credit: CDC

By Tiyese Jeranji

Most tuberculosis (TB) tests still require a trip to the clinic. Now, new technology has made it possible to test people at home. This could be a big deal for South Africa, where much TB goes undiagnosed. We unpack the findings and implications of a recent study into such TB home testing.

One of the biggest challenges in combatting TB in South Africa is that many people who fall ill with the disease are diagnosed late, or not diagnosed at all.

The World Health Organization (WHO) estimates that 280 000 people fell ill with TB in the country in 2022. Of these, roughly 66 000 were not diagnosed, and accordingly also not treated. Apart from the damage to the health of the people who are not diagnosed and treated, this also has implications for the further spread of TB since untreated TB is often infectious TB – people become non-infectious within a few weeks of starting TB treatment.

Typically, people who fall ill with TB only get diagnosed once they turn up at clinics with TB symptoms – this is called passive case-finding. In recent years, there has been a growing recognition that passive case-finding alone is not good enough if we want to diagnose more people more quickly. As a result, many people in South Africa considered to be at high risk of TB are now offered TB tests whether or not they have symptoms – an approach called targeted universal testing. Screening for TB using new mobile X-ray technology has also been piloted in the country.

Now, in the latest such active case-finding innovation, researchers have been offering people TB tests in the comfort of their own homes.

Dr Andrew Medina-Marino, a senior investigator at the Desmond Tutu Health Foundation (DTHF), tells Spotlight no one in the world was testing for TB at home until they recently started doing so at the DTHF’s new research site in the Eastern Cape.

The testing is done using a molecular testing device, roughly the size of a two litre Coke bottle, called the GeneXpert Edge. The GeneXpert Edge is a portable version of the GeneXpert machines that have been used in labs across the country to diagnose TB for over a decade.

The GeneXpert Edge is a standardised testing device that detects TB DNA in sputum. (Photo: Nasief Manie/Spotlight)

One challenge with the device was that it needed to be plugged into a power outlet in a wall and not all homes in the area have power. “So what we did is, we hooked up a car-like battery to the device and we were able to take it into people’s homes,” says Medina-Marino.

‘Acceptable and feasible’

A study led by Medina-Marino, recently published in Open Forum Infectious Diseases, set out to determine the acceptability and feasibility of in-home testing of household contacts of people with TB.

The study was conducted among 84 households in Duncan Village, a township in the Buffalo City Metropolitan Municipality in the Eastern Cape. The Metro had an estimated TB incidence of 876 cases per 100 000 population in 2019, according to the National Institute for Communicable Diseases. This number is much higher than the latest WHO estimate of 468 per 100 000 for South Africa as a whole.

From July 2018 to May 2019, people diagnosed with pulmonary TB were recruited from six government health clinics in the area. They were asked for permission to visit their homes to screen their household contacts for TB. Household contacts were verbally assessed for signs or symptoms of TB, including night sweats, weight loss, persistent cough and a fever.

Households where people had any signs or symptoms of TB were randomised either to be referred to a local clinic for TB testing or to be tested immediately in their home. Of the 84 randomised households, 51 household contacts were offered in-home testing, and all of them accepted.

For the test with the GeneXpert Edge, Medina-Marino says household contacts had to produce a sputum sample. About 47% (24/51) were able to produce sputum. This was then mixed with a reagent containing the required components for a polymerase chain reaction test. This solution was then loaded into a disposable cartridge/test module and inserted into the Edge device. Results were available in about 90 minutes. Anyone who received a positive test result in their home was immediately referred to a clinic for TB treatment.

Regarding the 47 household contacts referred for testing at the clinic, only 15% (7 people) presented for clinic-based TB evaluation, 6 were tested, and 4 out of 6 returned for their results.

Ultimately, the study found that in-home testing of household contacts for TB was acceptable and feasible.

“It’s feasible. If you compare the rate of uptake of treatment versus the rate of uptake for testing, it looks like it’s performing much better when you do home based testing versus referral for testing at the clinic,” says Medina-Marino.

Risk of stigma?

Similar to when HIV home-based testing studies were carried out, Medina-Marino says prior to their study, community members expressed concerns about stigmatising houses that were visited. “[A] lot of people were saying: ‘If you go to people’s houses, you’re going to stigmatise the household.’”

But what they actually found was that people didn’t feel stigmatised. Household contacts of people with TB felt that coming to the house to test people brought a sense of security in the home. He adds that it was easy for people to believe the results because everything was done in front of them.

In instances where people didn’t have TB, Medina-Marino says household contacts were comforted that they didn’t have to be scared of the person tested. In instances where people did have TB, he says household contacts were supportive of the person starting treatment.

How the test compares to other tests

Apart from testing for TB, the GeneXpert Edge can also detect whether someone’s TB is resistant to rifampicin. This is one of the medicines in the standard four-drug combination used to treat TB.

Unlike the latest lab-based GeneXpert tests, the GeneXpert Edge does not detect resistance to any TB medicines other than rifampicin. “It is hard to fit the probes needed to detect other forms of resistance into the cartridge,” says study co-author Professor Grant Theron, head of the Clinical Mycobacteriology and Epidemiology Research group at Stellenbosch University’s Molecular Biology and Human Genetics Unit.

Theron notes that the sensitivity and specificity of GeneXpert Edge is similar to that of lab-based GeneXpert machines if the tests are done on specimens from the same type of patient and the same test cartridge. (High sensitivity means the likelihood of false negatives is low, while high specificity means the likelihood of false positives is low.)
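For readers unfamiliar with the terms, here is a minimal sketch of how the two measures are computed, using made-up counts purely for illustration:

```python
# Hypothetical counts, not data from the study.
tp = 90   # true positives: TB present, test positive
fn = 10   # false negatives: TB present, test negative
tn = 95   # true negatives: TB absent, test negative
fp = 5    # false positives: TB absent, test positive

sensitivity = tp / (tp + fn)  # share of real TB cases the test catches
specificity = tn / (tn + fp)  # share of TB-free people the test clears

print(f"Sensitivity: {sensitivity:.0%}")  # 90% -> few false negatives
print(f"Specificity: {specificity:.0%}")  # 95% -> few false positives
```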

Performance may however differ because of differences between people who test at home and people who test at the clinic. Theron explains that in their study they tested people who did not yet feel sick enough to go to get tested at the clinic. People who are sicker, and who are accordingly more likely to go to the clinic, are likely to have more pathogen in their sputum samples and be easier to diagnose.

‘A breakthrough for TB’

Home-based testing is a significant breakthrough in TB because of its crucial role in detecting cases early and enabling timely tracing and testing of household contacts, says Dr Ntokozo Mzimela, a lecturer in integrated pathology in the Faculty of Health Sciences at Nelson Mandela University.

She tells Spotlight it also offers several advantages over clinic-based tests. “They are highly accessible, facilitate mass testing, reduce the risk of disease transmission, and address patient reluctance by allowing testing in the comfort and privacy of one’s home.”

Mzimela adds the GeneXpert Edge and portable X-ray screening serve complementary roles in TB diagnosis. “While the X-ray reveals lung abnormalities, the Edge confirms the presence of TB bacteria. Both tools are essential and should be used in conjunction to provide comprehensive diagnostic insights and ensure accurate and timely treatment for patients,” she says.

Professor Keertan Dheda agrees that home-based testing could link up neatly with portable X-ray, but adds it is still too early to determine where home-based TB testing will fit into the country’s TB testing programme. Dheda heads up the Division of Pulmonology at Groote Schuur Hospital and the University of Cape Town.

“We don’t yet know whether testing everyone is the right approach or whether reflex testing based on chest x-ray abnormalities is the right approach,” Dheda says. “Now that feasibility has been established, it means that more studies can be undertaken, and operational research can be commenced.”

Further studies are already underway, Medina-Marino tells Spotlight.

He says the study in Duncan Village found that about 60% of household contacts who had TB symptoms could not cough up a sputum sample. His team therefore decided to combine in-home testing with an oral swab.

“So in the study that we’re doing now in households, we found an additional 12 people who cannot produce sputum but on their swab test, they showed a positive swab result. Tongue swabs increase yield of case finding among those unable to produce sputum,” he says.

Republished from Spotlight under a Creative Commons licence.


AI Models that can Identify Patient Demographics in X-rays are Also Unfair

Photo by Anna Shvets

Artificial intelligence models often play a role in medical diagnoses, especially when it comes to analysing images such as X-rays. But these models have been found not to perform as well across all demographic groups, usually faring worse on women and people of colour.

These models have also been shown to develop some surprising abilities. In 2022, MIT researchers reported that AI models can make accurate predictions about a patient’s race from their chest X-rays – something that the most skilled radiologists can’t do.

Now, in a new study appearing in Nature, the same research team has found that the models that are most accurate at making demographic predictions also show the biggest “fairness gaps”, ie having reduced accuracy diagnosing images of people of different races or genders. The findings suggest that these models may be using “demographic shortcuts” when making their diagnostic evaluations, which lead to incorrect results for women, Black people, and other groups, the researchers say.

“It’s well-established that high-capacity machine-learning models are good predictors of human demographics such as self-reported race or sex or age. This paper re-demonstrates that capacity, and then links that capacity to the lack of performance across different groups, which has never been done,” says senior author Marzyeh Ghassemi, an MIT associate professor of electrical engineering and computer science.

The researchers also found that they could retrain the models in a way that improves their fairness. However, their approaches to “debiasing” worked best when the models were tested on the same types of patients they were trained on, such as patients from the same hospital. When these models were applied to patients from different hospitals, the fairness gaps reappeared.

“I think the main takeaways are, first, you should thoroughly evaluate any external models on your own data because any fairness guarantees that model developers provide on their training data may not transfer to your population. Second, whenever sufficient data is available, you should train models on your own data,” says Haoran Zhang, an MIT graduate student and one of the lead authors of the new paper.

Removing bias

As of May 2024, the FDA has approved 882 AI-enabled medical devices, with 671 of them designed to be used in radiology. Since 2022, when Ghassemi and her colleagues showed that these diagnostic models can accurately predict race, they and other researchers have shown that such models are also very good at predicting gender and age, even though the models are not trained on those tasks.

“Many popular machine learning models have superhuman demographic prediction capacity – radiologists cannot detect self-reported race from a chest X-ray,” Ghassemi says. “These are models that are good at predicting disease, but during training are learning to predict other things that may not be desirable.”

In this study, the researchers set out to explore why these models don’t work as well for certain groups. In particular, they wanted to see if the models were using demographic shortcuts to make predictions that ended up being less accurate for some groups. These shortcuts can arise in AI models when they use demographic attributes to determine whether a medical condition is present, instead of relying on other features of the images.

Using publicly available chest X-ray datasets from Beth Israel Deaconess Medical Center (BIDMC) in Boston, the researchers trained models to predict whether patients had one of three different medical conditions: fluid buildup in the lungs, collapsed lung, or enlargement of the heart. Then, they tested the models on X-rays that were held out from the training data.

Overall, the models performed well, but most of them displayed “fairness gaps” – that is, discrepancies between accuracy rates for men and women, and for white and Black patients.
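A fairness gap of this kind is straightforward to compute; the sketch below uses tiny made-up arrays purely to make the idea concrete (the study itself evaluates large chest X-ray datasets):

```python
import numpy as np

# Hypothetical labels, predictions and a group attribute (eg sex);
# illustrative values only, not the study's data.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 0, 0])
group  = np.array(["F", "F", "F", "F", "M", "M", "M", "M"])

def accuracy(mask):
    return (y_true[mask] == y_pred[mask]).mean()

acc_f = accuracy(group == "F")  # 0.75
acc_m = accuracy(group == "M")  # 0.50
print(f"Fairness gap: {abs(acc_f - acc_m):.2f}")  # 0.25
```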

The models were also able to predict the gender, race, and age of the X-ray subjects. Additionally, there was a significant correlation between each model’s accuracy in making demographic predictions and the size of its fairness gap. This suggests that the models may be using demographic categorisations as a shortcut to make their disease predictions.

The researchers then tried to reduce the fairness gaps using two types of strategies. For one set of models, they trained them to optimise “subgroup robustness,” meaning that the models are rewarded for having better performance on the subgroup for which they have the worst performance, and penalised if their error rate for one group is higher than the others.

In another set of models, the researchers forced them to remove any demographic information from the images, using “group adversarial” approaches. Both strategies worked fairly well, the researchers found.

“For in-distribution data, you can use existing state-of-the-art methods to reduce fairness gaps without making significant trade-offs in overall performance,” Ghassemi says. “Subgroup robustness methods force models to be sensitive to mispredicting a specific group, and group adversarial methods try to remove group information completely.”
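A minimal sketch of the subgroup-robustness idea (in the spirit of worst-group optimisation; a simplification for illustration, not the authors’ implementation):

```python
import numpy as np

# Toy per-sample losses for two subgroups; illustrative values only.
losses = np.array([0.2, 0.3, 0.9, 1.1])  # model's loss on each sample
groups = np.array([0, 0, 1, 1])          # subgroup membership

# Standard training minimises the average loss over everyone:
avg_loss = losses.mean()                                            # 0.625

# Subgroup-robust training instead penalises the worst-off group,
# pushing the model to improve where it currently fails most:
worst = max(losses[groups == g].mean() for g in np.unique(groups))  # 1.0

print(avg_loss, worst)
```

Group adversarial training typically takes the complementary route: a second network tries to predict the group label from the model’s internal features, and the main model is trained to make that prediction fail, stripping group information out of its representation.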

Not always fairer

However, those approaches only worked when the models were tested on data from the same types of patients that they were trained on, eg from BIDMC.

When the researchers tested the models that had been “debiased” using the BIDMC data to analyse patients from five other hospital datasets, they found that the models’ overall accuracy remained high, but some of them exhibited large fairness gaps.

“If you debias the model in one set of patients, that fairness does not necessarily hold as you move to a new set of patients from a different hospital in a different location,” Zhang says.

This is worrisome because in many cases, hospitals use models that have been developed on data from other hospitals, especially in cases where an off-the-shelf model is purchased, the researchers say.

“We found that even state-of-the-art models which are optimally performant in data similar to their training sets are not optimal – that is, they do not make the best trade-off between overall and subgroup performance – in novel settings,” Ghassemi says. “Unfortunately, this is actually how a model is likely to be deployed. Most models are trained and validated with data from one hospital, or one source, and then deployed widely.”

The researchers found that the models that were debiased using group adversarial approaches showed slightly more fairness when tested on new patient groups than those debiased with subgroup robustness methods. They now plan to try to develop and test additional methods to see if they can create models that do a better job of making fair predictions on new datasets.

The findings suggest that hospitals that use these types of AI models should evaluate them on their own patient population before beginning to use them, to make sure they aren’t giving inaccurate results for certain groups.

New Pulsatility Metric in Brain Blood Vessels for Studying Dementia

Photo by Anna Shvets on Pexels

Researchers from the Mātai Institute and the Auckland Bioengineering Institute have developed a new metric from measured blood circulation in the brain. The new metric opens up new research avenues for brain conditions, including Alzheimer’s disease and other forms of dementia. The research has been published in Scientific Reports, a leading Nature Portfolio journal.

Each time the heart beats, it pumps blood through the brain vessels, causing them to expand slightly and then relax. This pulsation in the brain helps distribute blood evenly across different areas of the brain, ensuring that all parts receive the oxygen and nutrients they need to function properly.

In healthy vessels, the pulse wave is dampened before it reaches the smallest vessels, where high pulsatility could be harmful. The new metric provides a comprehensive measure of the small vessel pulsatility risk.

The new metric is based on 4D flow MRI technology and is particularly important because increased vascular pulsatility is linked to several brain conditions, including Alzheimer’s disease and other forms of dementia.

By accurately measuring how pulsatility is transmitted in the brain, researchers can better understand the underlying mechanisms of these diseases and potentially guide the development of new treatments.

Current MRI methods face limitations due to anatomical variations and measurement constraints. The new technique removes this issue by integrating thousands of measurements across all brain vessels, rather than the traditional method of looking at one spot. This provides a richer metric representative of the entire brain.
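The standard per-location measure this builds on is the Gosling pulsatility index, PI = (v_max − v_min) / v_mean. The sketch below aggregates it over many vessel voxels to show the general idea; the published metric is more sophisticated than a plain average, and the numbers here are synthetic stand-ins, not real 4D flow data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic velocity waveforms over one cardiac cycle for many
# vessel voxels (n_voxels x n_timepoints); illustrative values only.
waveforms = (40 + 10 * np.sin(np.linspace(0, 2 * np.pi, 20))
             + rng.normal(0, 2, size=(5000, 20)))

v_max, v_min, v_mean = (waveforms.max(axis=1), waveforms.min(axis=1),
                        waveforms.mean(axis=1))

pi = (v_max - v_min) / v_mean   # Gosling pulsatility index per voxel
print(f"Whole-brain aggregate PI: {pi.mean():.2f}")
```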

“The ability to measure how pulsatility is transmitted through the brain’s arteries could revolutionise our approach to neurological diseases, and support research in vascular damage hypotheses,” says first author Sergio Dempsey.

“Our method allows for a detailed assessment of the brain’s vascular health, which is often compromised in neurodegenerative disorders.”

The study also highlighted the potential to enhance clinical assessments and research on brain health. By integrating this new metric into routine diagnostic procedures, healthcare providers can offer more precise and personalised care plans for individuals at risk of or suffering from cognitive impairments.

To make the most of the new metric’s implications for patient care, the researchers have made their tools publicly available, integrating them into pre-existing open-source software. This enables scientists and clinicians worldwide to adopt the advanced methodology, fostering further research and collaboration in the field of neurology.

Results from the initial study of the metric also identified important sex differences in vascular dynamics, prompting a new study focused on sex-related dynamics.

The research team is planning further studies to explore the applications of this technique in larger and more diverse populations.

Source: University of Auckland

AI Screening Could Boost Survival Rate for Hepatocellular Carcinoma from 20% to 90%

Photo by National Cancer Institute on Unsplash

A breakthrough study published in The American Journal of Pathology describes a new machine-learning model that may improve accuracy in early diagnosis of hepatocellular carcinoma and monitoring the impact of treatment.

Early diagnosis of hepatocellular carcinoma (HCC) – one of the most fatal malignancies – is crucial to improve patient survival. In this breakthrough study, investigators report on the development of a serum fusion-gene machine-learning model. This important screening tool may increase the five-year survival rate of patients with HCC from 20% to 90% because of its improved accuracy in early diagnosis of HCC and monitoring the impact of treatment.

HCC is the most common form of liver cancer and accounts for around 90% of cases. Currently, the most common screening test for the HCC biomarker, serum alpha-fetoprotein, is not always accurate, and up to 60% of liver cancers are only diagnosed in advanced stages, resulting in a survival rate of only around 20%.

Lead investigator Jian-Hua Luo, MD, PhD, Department of Pathology, High Throughput Genome Center, and Pittsburgh Liver Research Center, University of Pittsburgh School of Medicine, explained: “Early diagnosis of liver cancer helps save lives. However, most liver cancers occur insidiously and without many symptoms. This makes early diagnosis challenging. What we need is a cost-effective, accurate, and convenient test to screen early-stage liver cancer in human populations. We wanted to explore if a machine-learning approach could be used to increase the accuracy of screening for HCC based on the status of the fusion genes.”

In the search for a more effective and efficient diagnostic tool to predict non-HCC and HCC cases, investigators analysed a panel of nine fusion transcripts in serum samples from 61 patients with HCC and 75 patients with non-HCC conditions using real-time quantitative reverse transcription PCR (RT-PCR). Seven of the nine fusions were frequently detected in HCC patients. The researchers generated machine-learning models based on serum fusion-gene levels to predict HCC in the training cohort, using the leave-one-out cross-validation approach.  

A four-fusion-gene logistic regression model produced an accuracy of 83% to 91% in predicting the occurrence of HCC. When combined with serum alpha-fetoprotein, the two-fusion-gene plus alpha-fetoprotein logistic regression model produced 95% accuracy for all the cohorts. Furthermore, quantification of fusion-gene transcripts in the serum samples accurately assessed the impact of the treatment and was able to monitor for the recurrence of the cancer.
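The general recipe, logistic regression validated with leave-one-out cross-validation, can be sketched as follows. The data here are synthetic stand-ins with the same cohort sizes, not the study’s serum measurements:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)

# Synthetic stand-in data: rows are patients, columns are serum
# levels of four fusion-gene transcripts (not the study's data).
y = np.array([1] * 61 + [0] * 75)                     # 61 HCC, 75 non-HCC
X = rng.normal(size=(len(y), 4)) + 0.8 * y[:, None]   # signal in cases

# Leave-one-out: each patient is predicted by a model trained on
# all the others, mirroring the paper's validation approach.
pred = cross_val_predict(LogisticRegression(), X, y, cv=LeaveOneOut())
print(f"LOOCV accuracy: {(pred == y).mean():.0%}")
```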

Dr. Luo commented, “The fusion gene machine-learning model significantly improves the early detection rate of HCC over serum alpha-fetoprotein alone. It may serve as an important tool in screening for HCC and in monitoring the impact of HCC treatment. This test will find patients who are likely to have HCC.”

Dr. Luo concluded, “Early treatment of liver cancer has a 90% five-year survival rate, while late treatment has only 20%. The alternative to this test is to subject every individual with some risk of liver cancer to imaging analysis every six months, which is very costly and ineffective. In addition, when imaging results are ambiguous, this test will help to differentiate malignant versus benign lesions.”

Source: Elsevier

New Biomarker Database for Astronaut Health may be Useful to Earthlings

Photo: Pixabay CC0

As space travel becomes more frequent, an international team of researchers has developed a new biomarker tool to help improve the growing field of aerospace medicine and the health of astronauts.

Dr Guy Trudel (Professor in the Faculty of Medicine), Odette Laneuville (Associate Professor, Faculty of Science, and Director of Biomedical Sciences) and Dr Martin Pelchat (Associate Professor in the Department of Biochemistry, Microbiology and Immunology) are among the contributors to an international study led by Eliah Overbey of Weill Cornell Medicine and the University of Austin. Published in Nature, it introduces the Space Omics and Medical Atlas (SOMA), a database of integrated data and a sample repository from a diverse range of space missions, including from SpaceX and NASA.

Space travel creates cellular, molecular, and physiological shifts in astronauts. SOMA is expected to provide much-needed biomedical profiling that can help tease out the short- and long-term health impacts of spaceflight, providing baseline data for health monitoring, risk mitigation, and countermeasures for upcoming lunar, Mars, and exploration-class missions. It is meant to help keep astronauts and space travellers alive and healthy.

It may also prove useful here on Earth.

“This represents a breakthrough in the study of human adaptation and life in space. Since many of the changes in astronauts in space resemble those of people who are immobile in bed, these studies can be clinically relevant. The data are therefore important for future space exploration while also providing a correlation to people on Earth with limited mobility or who are bedridden before their rehabilitation,” says Dr Trudel, a rehabilitation physician and researcher at The Ottawa Hospital who has focused on space travel and its effects on the human immune system.

Highlights of the study include:

  • The Atlas includes extensive molecular and physiological profiles encompassing genomics, epigenomics, transcriptomics, proteomics, metabolomics, and microbiome data sets, which reveal some consistent features across missions.
  • Samples were taken pre-flight, during, post-flight and throughout the recovery period.
  • A comprehensive profile of the physiological changes of the Inspiration4 (I4) crew (ages 29, 38, 42, 51) was compiled, and 13 unique biospecimen sample types were collected and processed.
  • 2911 samples were banked with over 1000 samples processed for sequencing, imaging, and biochemical analysis creating the first-ever aerospace medicine biobank.
  • The SOMA resource represents an over 10-fold increase in total publicly available human space omics data.

“The University of Ottawa’s Faculty of Medicine, its Faculty of Science, and The Ottawa Hospital’s Bone and Joint Research laboratory have a long history of contributions and successes in studying human adaptation to space. They also involve students from different programs, providing a unique learning experience in both bone and joint health, and in the rapidly developing field of aerospace medicine,” adds Dr Trudel.

Source: University of Ottawa

New Blood Test for Ischaemic Stroke is a ‘Game-changer’

Ischaemic and haemorrhagic stroke. Credit: Scientific Animations CC4.0

Investigators from Brigham and Women’s Hospital have developed a new test that combines blood-based biomarkers with a clinical score to identify patients experiencing large vessel occlusion (LVO) stroke with high accuracy. The results are published in the journal Stroke: Vascular and Interventional Neurology.

“We have developed a game-changing, accessible tool that could help ensure that more people suffering from stroke are in the right place at the right time to receive critical, life-restoring care,” said senior author Joshua Bernstock, MD, PhD, MPH, a clinical fellow in the Department of Neurosurgery at Brigham and Women’s Hospital.

Most strokes are ischaemic, in which blood flow to the brain is obstructed. LVO strokes are an aggressive type of ischaemic stroke in which a major artery in the brain becomes blocked, causing brain cells to rapidly die off from lack of oxygen. Major medical emergencies, LVO strokes require swift treatment with mechanical thrombectomy, a surgical procedure that retrieves the blockage.

“Mechanical thrombectomy has allowed people that otherwise would have died or become significantly disabled to be completely restored, as if their stroke never happened,” said Bernstock. “The earlier this intervention is enacted, the better the patient’s outcome is going to be. This exciting new technology has the potential to allow more people globally to get this treatment faster.”

The research team previously targeted two specific proteins found in capillary blood, one called glial fibrillary acidic protein (GFAP), which is also associated with brain bleeds and traumatic brain injury, and one called D-dimer. In this study, they demonstrated that the levels of these blood-based biomarkers combined with field assessment stroke triage for emergency destination (FAST-ED) scores could identify LVO ischaemic strokes while ruling out other conditions such as bleeding in the brain. Brain bleeds cause similar symptoms to LVO stroke, making them hard to distinguish from one another in the field, yet treatment for each is vastly different.

In this prospective, observational diagnostic accuracy study, the researchers looked at data from a cohort of 323 patients coded for stroke in Florida between May 2021 and August 2022. They found that combining the levels of the biomarkers GFAP and D-dimer with FAST-ED data less than six hours from the onset of symptoms allowed the test to detect LVO strokes with 93% specificity and 81% sensitivity. Other findings included that the test ruled out all patients with brain bleeds, suggesting that it may also eventually be used to detect intracerebral haemorrhage in the field.
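To get a feel for what 81% sensitivity and 93% specificity mean at the point of triage, here is a small illustrative calculation; the prevalence figure is an assumption for illustration, not a number from the study:

```python
# Hypothetical field scenario: 1000 suspected-stroke patients,
# 20% of whom truly have an LVO (assumed prevalence, not study data).
n, prevalence = 1000, 0.20
sensitivity, specificity = 0.81, 0.93   # reported test performance

lvo = n * prevalence
true_pos = sensitivity * lvo                 # LVOs correctly flagged
false_neg = lvo - true_pos                   # LVOs missed
false_pos = (1 - specificity) * (n - lvo)    # false alarms

ppv = true_pos / (true_pos + false_pos)      # chance a flagged patient has LVO
print(f"Flagged: {true_pos:.0f}, missed: {false_neg:.0f}, "
      f"false alarms: {false_pos:.0f}, PPV: {ppv:.0%}")
```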

Bernstock’s team also sees promising potential future use of this accessible diagnostic tool in low- and middle-income countries, where advanced imaging is not always available. It might also be useful in assessing patients with traumatic brain injuries. Next, they are carrying out another prospective trial to measure the test’s performance when used in an ambulance. They have also designed an interventional trial that leverages the technology to expedite the triage of stroke patients by having them bypass standard imaging and move directly to intervention.

“In stroke care, time is brain,” Bernstock said. “The sooner a patient is put on the right care pathway, the better they are going to do. Whether that means ruling out bleeds or ruling in something that needs an intervention, being able to do this in a prehospital setting with the technology that we built is going to be truly transformative.”

Source: Brigham and Women’s Hospital

Flexible Microdisplay Enables Real-time Visualisation in Neurosurgery

The device represents a huge leap ahead in guiding neurosurgeons with visualised brain activity

The device’s LEDs can light up in several colours. This allows surgeons to see which areas they need to operate on and lets them track brain states during surgery, including the onset of epileptic seizures. Credit: UCSF

A thin film that combines an electrode grid and LEDs can both track and produce a visual representation of the brain’s activity in real time during surgery – a huge improvement over the current state of the art. The device is designed to provide neurosurgeons with visual information about a patient’s brain, allowing them to monitor brain states during surgical interventions to remove brain lesions, including tumours and epileptic tissue.

The team behind the device describes their work in the journal Science Translational Medicine.

Each LED in the device represents the activity of a few thousand neurons. In a series of proof-of-concept experiments in rodents and large non-primate mammals, researchers showed that the device can effectively track and display neural activity in the brain corresponding to different areas of the body. In this case, the LEDs developed by the team light up red in the areas that need to be removed by the surgeon. Surrounding areas that control critical functions and should be avoided show up in green.

The study also showed that the device can visualise the onset and map the propagation of an epileptic seizure on the surface of the brain. This would allow physicians to isolate the ‘nodes’ of the brain that are involved in epilepsy. It also would allow physicians to deliver necessary treatment by removing tissue or by using electrical pulses to stimulate the brain.

“Neurosurgeons could see and stop a seizure before it spreads, view what brain areas are involved in different cognitive processes, and visualise the functional extent of tumour spread. This work will provide a powerful tool for the difficult task of removing a tumour from the most sensitive brain areas,” said Daniel Cleary, one of the study’s coauthors, a neurosurgeon and assistant professor at Oregon Health and Science University.

The device was conceived and developed by a team of engineers and physicians from University of California San Diego and Massachusetts General Hospital (MGH) and was led by Shadi Dayeh, the paper’s corresponding author and a professor in the Department of Electrical and Computer Engineering at UC San Diego.

Protecting critical brain functions

During brain surgery, physicians need to map brain function to define which areas of the organ control critical functions and can’t be removed. Currently, neurosurgeons work with a team of electrophysiologists during the procedure. But that team and their monitoring equipment are located in a different part of the operating room.

Brain areas that need to be protected and those that need to be operated on are either marked by electrophysiologists on a paper that is brought to the surgeon or communicated verbally to the surgeon, who then places sterile papers on the brain surface to mark these regions.

“Both are inefficient ways of communicating critical information during a procedure, and could impact its outcomes,” said Dr Angelique Paulk of MGH, who is a co-author and co-inventor of the technology.

In addition, the electrodes currently used to monitor brain activity during surgery do not produce detailed, fine-grained data. So surgeons need to keep a buffer zone, known as a resection margin, of 5 to 7mm around the area they are removing inside the brain.

This means that they might leave some harmful tissue in. The new device provides a level of detail that would shrink this buffer zone to less than 1mm.

“We invented the brain microdisplay to display with precision critical cortical boundaries and to guide neurosurgery in a cost-effective device that simplifies and reduces the time of brain mapping procedures,” said Dayeh.

Researchers installed the LEDs on top of another innovation from the Dayeh lab, the platinum nanorod electrode grid (PtNRGrid). Using the PtNRGrids since 2019, Dayeh’s team pioneered human brain and spinal cord mapping with thousands of channels to monitor brain neural activity.

They reported early safety and effectiveness results in tens of human subjects in a series of 2022 articles in Science Translational Medicine (‘New sensor grids record human brain signals with record-breaking resolution’ and ‘Microelectrode array can enable safer spinal cord surgery’) – ahead of Neuralink and other companies in this space.

The PtNRGrid also includes perforations, which enable physicians to insert probes to stimulate the brain with electrical signals, both for mapping and for therapy.

How it’s made

The display uses gallium nitride-based micro-LEDs, bright enough to be seen under surgical lights. The two models built measure 5mm or 32mm on a side, with 1024 or 2048 LEDs. They capture brain activity at 20 000 samples a second, enabling precise, real-time display of cortical dynamics.

“This enables precise and real-time displays of cortical dynamics during critical surgical interventions,” said Youngbin Tchoe, the first author and co-inventor, formerly a postdoc in the Dayeh group at UC San Diego and now an assistant professor at Ulsan National Institute of Science and Technology.

In addition to the LEDs, the device includes acquisition and control electronics as well as software drivers to analyse and project cortical activity directly from the surface of the brain.
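The display logic can be pictured with a toy sketch: score each channel’s activity and colour its LED accordingly. The threshold and scores below are invented for illustration and bear no relation to the device’s actual firmware:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical per-channel activity scores (eg normalised band power)
# for a 1024-LED grid; illustrative values only.
activity = rng.random(1024)
resect_threshold = 0.8               # invented cut-off for illustration

# Red marks tissue flagged for removal, green marks tissue to protect,
# mirroring the colour scheme described above.
colours = ["red" if a >= resect_threshold else "green" for a in activity]
print(f"{colours.count('red')} of {len(colours)} LEDs lit red")
```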

“The brain iEEG-microdisplay can impressively both record the activity of the brain to a very fine degree and display this activity for a neurosurgeon to use in the course of surgery. We hope that this device will ultimately lead to better clinical outcomes for patients with its ability to both reveal and communicate the detailed activity of the underlying brain during surgery,” said study coauthor Jimmy Yang, a neurosurgeon and assistant professor at The Ohio State University.

Next steps

Dayeh’s team is working to build a microdisplay that will include 100 000 LEDs, with a resolution equivalent to that of a smartphone screen – for a fraction of the cost of a high-end smartphone. Each LED in those displays would reflect the activity of a few hundred neurons.

These brain microdisplays would also include a foldable portion. This would allow surgeons to operate within the foldable portion and monitor the impact of the procedure as the other, unfolded portion of the microdisplay shows the status of the brain in real time.

Researchers are also working on one limitation of the study – the close proximity of the LED sensors and the PtNRGrids led to a slight interference and noise in the data.

The team plans to build customised hardware to change the frequency of the pulses that turn on the LEDs to make it easier to screen out that signal, which is not relevant to the brain’s electrical activity.

Source: University of California San Francisco

Could Diamond Dust Replace Gadolinium in MRI?

Photo by Mart Production on Pexels

An unexpected discovery surprised a scientist at the Max Planck Institute for Intelligent Systems in Stuttgart: nanometre-sized diamond particles, which were intended for a completely different purpose, shone brightly in a magnetic resonance imaging experiment – outshining the actual contrast agent, the heavy metal gadolinium.

The researchers, publishing their serendipitous discovery in Advanced Materials, believe that diamond nanoparticles, in addition to their use in drug delivery to treat tumour cells, might one day become a novel MRI contrast agent.

While the discovery of diamond dust’s potential as a future MRI contrast agent may never be considered a turning point in science history, its signal-enhancing properties are nevertheless an unexpected finding that may open up new possibilities: diamond dust glows brightly in images even days after being injected.

Perhaps it could replace gadolinium, which has been used in clinics to enhance the brightness of tissues to detect tumours, inflammation, or vascular abnormalities for more than 30 years. But when injected into a patient’s bloodstream, gadolinium travels not only to tumour tissue but also to surrounding healthy tissue. It is retained in the brain and kidneys, persisting months to years after the last administration and its long-term effects are not yet known. Gadolinium also causes a number of other side effects, and the search for an alternative has been going on for years.

Serendipity often advances science

Could diamond dust, a carbon-based material, become a well-tolerable alternative because of an unexpected discovery made in a laboratory at the Max Planck Institute for Intelligent Systems in Stuttgart?

Dr Jelena Lazovic Zinnanti was working on an experiment using nanometre-sized diamond particles for an entirely different purpose. The research scientist, who heads the Central Scientific Facility Medical Systems at MPI-IS, was surprised when she put the 3–5nm particles into tiny drug-delivery capsules made of gelatin. She wanted these capsules to rupture when exposed to heat. She assumed that diamond dust, with its high heat capacity, could help.

“I had intended to use the dust only to heat up the drug carrying capsules,” Jelena recollects.

“I used gadolinium to track the dust particles’ position. I intended to learn if the capsules with diamonds inside would heat up better. While performing preliminary tests, I got frustrated, because gadolinium would leak out of the gelatin – just as it leaks out of the bloodstream into the tissue of a patient. I decided to leave gadolinium out. When I took MRI images a few days later, to my surprise, the capsules were still bright. Wow, this is interesting, I thought! The diamond dust seemed to have better signal enhancing properties than gadolinium. I hadn’t expected that.”

Jelena took these findings further by injecting the diamond dust into live chicken embryos. She discovered that while gadolinium diffuses everywhere, the diamond nanoparticles stayed in the blood vessels, didn’t leak out and later shone brightly in the MRI, just as they had done in the gelatin capsules.

While other scientists had published papers showing how they used diamond particles attached to gadolinium for magnetic resonance imaging, no one had ever shown that diamond dust itself could be a contrast agent. Two years later, Jelena became the lead author of a paper now published in Advanced Materials.

“Why the diamond dust shines bright in our MRI still remains a mystery to us,” says Jelena.

She can only assume the reason is the dust’s magnetic properties: “I think the tiny particles have carbons that are slightly paramagnetic. The particles may have a defect in their crystal lattice, making them slightly magnetic. That’s why they behave like a T1 contrast agent such as gadolinium. Additionally, we don’t know whether diamond dust could potentially be toxic, something that needs to be carefully examined in the future.”
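The “behaves like a T1 contrast agent” remark can be unpacked with the standard saturation-recovery signal relation, S ∝ 1 − exp(−TR/T1): shortening T1 lets more magnetisation recover between excitations, so the tissue appears brighter. The numbers below are illustrative, not measurements from the study:

```python
import numpy as np

TR = 500.0  # repetition time in ms (illustrative acquisition setting)

def t1w_signal(t1_ms):
    # Shorter T1 -> more recovery between excitations -> brighter voxel.
    return 1 - np.exp(-TR / t1_ms)

print(f"T1 = 1200 ms (no agent): signal ~ {t1w_signal(1200):.2f}")
print(f"T1 =  400 ms (T1 agent): signal ~ {t1w_signal(400):.2f}")
```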

Source: Max Planck Institute for Intelligent Systems