Tag: artificial intelligence

Is AI in Medicine Playing Fair?

Photo by Christina Morillo

As artificial intelligence (AI) rapidly integrates into health care, a new study by researchers at the Icahn School of Medicine at Mount Sinai reveals that generative AI models may recommend different treatments for the same medical condition based solely on a patient’s socioeconomic and demographic background.

Their findings, detailed in the April 7, 2025, online issue of Nature Medicine, highlight the importance of early detection and intervention to ensure that AI-driven care is safe, effective, and appropriate for all.

As part of their investigation, the researchers stress-tested nine large language models (LLMs) on 1,000 emergency department cases, each replicated with 32 different patient backgrounds, yielding more than 1.7 million medical recommendations. Despite identical clinical details, the AI models occasionally altered their decisions based on a patient’s socioeconomic and demographic profile, affecting key areas such as triage priority, diagnostic testing, treatment approach, and mental health evaluation.
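The study’s counterfactual design – holding the clinical vignette fixed while varying only the demographic framing – can be sketched roughly as follows. This is a minimal illustration, not the authors’ code: `query_model` is a hypothetical stub standing in for a real LLM call, and the vignette and demographic attributes are invented for the example.

```python
from itertools import product

# Hypothetical stress-test sketch: replicate one vignette across
# demographic profiles and compare the model's recommendations.
VIGNETTE = "45-year-old presenting with acute chest pain and shortness of breath."

DEMOGRAPHICS = {
    "income": ["high income", "low income"],
    "housing": ["stably housed", "unhoused"],
}

def query_model(prompt: str) -> str:
    # Stub standing in for an LLM call; a deliberately biased toy rule
    # so the divergence check below has something to flag.
    return "CT scan" if "high income" in prompt else "no further testing"

def stress_test(vignette: str, demographics: dict) -> dict:
    results = {}
    for combo in product(*demographics.values()):
        profile = ", ".join(combo)
        prompt = f"Patient ({profile}): {vignette} Recommend the next step."
        results[profile] = query_model(prompt)
    return results

results = stress_test(VIGNETTE, DEMOGRAPHICS)
# Identical clinical details should yield identical recommendations;
# more than one distinct answer signals demographic-driven divergence.
divergent = len(set(results.values())) > 1
```

Scaled up to 1,000 cases, 32 profiles, and nine models, the same comparison yields the study’s 1.7 million-recommendation audit.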

“Our research provides a framework for AI assurance, helping developers and health care institutions design fair and reliable AI tools,” says co-senior author Eyal Klang, MD, Chief of Generative-AI in the Windreich Department of Artificial Intelligence and Human Health at the Icahn School of Medicine at Mount Sinai. “By identifying when AI shifts its recommendations based on background rather than medical need, we inform better model training, prompt design, and oversight. Our rigorous validation process tests AI outputs against clinical standards, incorporating expert feedback to refine performance. This proactive approach not only enhances trust in AI-driven care but also helps shape policies for better health care for all.” 

One of the study’s most striking findings was the tendency of some AI models to escalate care recommendations—particularly for mental health evaluations—based on patient demographics rather than medical necessity. In addition, high-income patients were more often recommended advanced diagnostic tests such as CT scans or MRI, while low-income patients were more frequently advised to undergo no further testing. The scale of these inconsistencies underscores the need for stronger oversight, say the researchers. 

While the study provides critical insights, the researchers caution that it represents only a snapshot of AI behavior. Future research will continue to include assurance testing to evaluate how AI models perform in real-world clinical settings and whether different prompting techniques can reduce bias. The team also aims to work with other health care institutions to refine AI tools, ensuring they uphold the highest ethical standards and treat all patients fairly.

“I am delighted to partner with Mount Sinai on this critical research to ensure AI-driven medicine benefits patients across the globe,” says physician-scientist and first author of the study, Mahmud Omar, MD, who consults with the research team. “As AI becomes more integrated into clinical care, it’s essential to thoroughly evaluate its safety, reliability, and fairness. By identifying where these models may introduce bias, we can work to refine their design, strengthen oversight, and build systems that ensure patients remain at the heart of safe, effective care. This collaboration is an important step toward establishing global best practices for AI assurance in health care.” 

“AI has the power to revolutionize health care, but only if it’s developed and used responsibly,” says co-senior author Girish N. Nadkarni, MD, MPH, Chair of the Windreich Department of Artificial Intelligence and Human Health, Director of the Hasso Plattner Institute for Digital Health, and the Irene and Dr. Arthur M. Fishberg Professor of Medicine at the Icahn School of Medicine at Mount Sinai. “Through collaboration and rigorous validation, we are refining AI tools to uphold the highest ethical standards and ensure appropriate, patient-centered care. By implementing robust assurance protocols, we not only advance technology but also build the trust essential for transformative health care. With proper testing and safeguards, we can ensure these technologies improve care for everyone—not just certain groups.”

Next, the investigators plan to expand their work by simulating multistep clinical conversations and piloting AI models in hospital settings to measure their real-world impact. They hope their findings will guide the development of policies and best practices for AI assurance in health care, fostering trust in these powerful new tools. 

Source: The Mount Sinai Hospital / Mount Sinai School of Medicine

First of its Kind Collaborative Report Unveils the Transformative Role of Artificial Intelligence and Data Science in Advancing Global Health in Africa

April 2nd, 2025, Nairobi, Kenya – Africa stands at the forefront of a revolutionary shift in global health, driven by artificial intelligence (AI) and data science, according to a report released today by the Science for Africa Foundation (SFA Foundation) together with African institutions and research councils. The report is the first of its kind to comprehensively examine national-level perspectives across Africa on AI and data science for global health. The landscape it presents offers an unprecedented view of the potential to improve AI governance in Africa, reducing risk and stopping the perpetuation of inequity.

Titled “Governance of Artificial Intelligence for Global Health in Africa”, the report is produced through the SFA Foundation’s Science Policy Engagement with Africa’s Research (SPEAR) programme as the culmination of a year-long effort involving convenings across Africa’s five regions, policy analysis and extensive surveys to identify policy gaps and opportunities in AI and data science for global health. Grounded in consultations across 43 African countries, the report incorporates insights from over 300 stakeholders, ensuring a comprehensive and inclusive approach to its findings.

“The global AI governance framework remains ill-suited to Africa’s unique needs and priorities,” said Prof. Tom Kariuki, Chief Executive Officer of the SFA Foundation. “Our report on AI in global health and data sciences champions a shift towards frameworks that reflect Africa’s context, ensuring ethical, equitable, and impactful applications of AI not only for our continent’s health challenges, but also to advance global health.”

Key findings and opportunities

The report identifies key trends, gaps, and opportunities in AI and data science for health across Africa:

  • Increasing national investments: Countries including Mauritius, Nigeria, Malawi, Ethiopia, Ghana, Rwanda, Senegal, and Tunisia have launched national AI programmes, while at least 39 African countries are actively pursuing AI R&D. Initiatives such as Rwanda’s Seed Investment Fund and Nigeria’s National Centre for AI and Robotics illustrate promising investments in AI startups.
  • Need for health-specific AI governance: Despite growing interest, there is a critical gap in governance frameworks tailored to health AI across Africa. While health is prioritised in AI discussions, specific frameworks for responsible deployment in health are still underdeveloped.
  • Inclusive AI policy development: Many existing AI policies lack gender and equity considerations. Closing these gaps is essential to prevent inequalities in access to AI advancements and health outcomes.

“Incorporating AI into healthcare is not just about technology—it is about enhancing our policy frameworks to ensure these advancements lead to better health outcomes for all Africans,” added Dr Uzma Alam, Programme Lead of the Science Policy Engagement with Africa’s Research (SPEAR) programme.

  • Existing policy frameworks on which to build: At least 35 African countries have national science, technology and innovation (STI), ICT, and health research and innovation policy frameworks containing policies applicable to the development and deployment of AI and data science.
  • A surge in African research on health AI and data science (big data), raising the need for equitable North-South R&D partnerships.

Recommendations and way forward

The report is expected to act as a catalyst for integrating AI into health strategies across the continent, marking a significant step in Africa’s journey toward leadership in global health innovation. It calls for:

  • Adaptive and Inclusive AI Governance: The report calls for the integration of diverse perspectives spanning gender, urban-rural dynamics, and indigenous knowledge into AI health governance frameworks. It highlights the need for adaptive policies that balance innovation with equitable access, while leveraging regional collaboration and supporting the informal sector.
  • Innovative Funding and African Representation: Recognising the potential of local knowledge and practices, the report advocates for creative funding models to bolster AI research and development. It emphasises connecting the informal sector to markets and infrastructure to encourage grassroots innovation.
  • The Reinforcement of Science Diplomacy: To position Africa as a key player in global AI governance, the report recommends investing in programmes that align AI technologies with Africa’s health priorities. It also stresses the importance of amplifying Africa’s voice in shaping international standards and agreements through robust science-policy collaboration.
  • Bridging the Gendered Digital Divide: To bridge the gendered digital divide in Africa, targeted initiatives are needed to address regional disparities and ensure gender inclusivity in the AI ecosystem. Programmes that build capacity and improve access to resources are essential.

“The report clearly outlines pathways for leveraging AI to bridge gaps and overcome current capacity constraints, while strengthening Africa’s role as a leader in shaping global health policy,” said Dr Evelyn Gitau, Chief Scientific Officer at the SFA Foundation. “This initiative showcases Africa’s potential to lead, innovate, and influence the global health ecosystem through AI.”

“We envision a world where AI advances health outcomes equitably, benefiting communities around the world. The Science for Africa Foundation’s report brings this vision to life by providing clarity on policy frameworks of AI and data science in global health. This empowers African voices to shape AI policy – not only directing healthcare innovation but setting a precedent for inclusive AI governance across sectors.” – Vilas Dhar, President of the Patrick J. McGovern Foundation.

Access the Report here: https://bit.ly/4jhzMFs

The Future of Healthcare Interoperability: Building a Stronger Foundation for Data Integration

Henry Adams, Country Manager South Africa, InterSystems

Healthcare data is one of the most complex and valuable assets in the modern world. Yet, despite the wealth of digital health information being generated daily, many organisations still struggle to access, integrate, and use it effectively. The promise of data-driven healthcare – where patient records, research insights, and operational efficiencies seamlessly come together – remains just that: a promise. The challenge lies in interoperability.

For years, healthcare institutions have grappled with fragmented systems, disparate data formats, and evolving regulatory requirements. The question is no longer whether to integrate but how best to do it. Should healthcare providers build, rent, or buy their data integration solutions? Each approach has advantages and trade-offs, but long-term success depends on choosing a solution that balances control, flexibility, and cost-effectiveness.

Why Interoperability Remains a Challenge

Despite significant advancements in standardisation, interoperability remains a persistent challenge in healthcare. A common saying in the industry – “If you’ve seen one HL7 interface, you’ve seen one HL7 interface” – illustrates the lack of uniformity across systems. Even FHIR, the latest interoperability standard, comes with many extensions and custom implementations, leading to inconsistency.
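The saying can be made concrete with a small, invented example: two sites transmit the same patient in HL7 v2 PID segments but populate the identifier in different fields (the PID-3 identifier list versus the legacy PID-2), so even “standard” messages need site-specific handling. The segment contents below are fabricated for illustration.

```python
# Illustrative only: two sites' HL7 v2 PID segments for the same patient,
# with the identifier placed in different fields -- the kind of
# site-to-site variation the saying refers to.
site_a = "PID|1||12345^^^HOSP_A||DOE^JOHN"   # identifier in PID-3
site_b = "PID|1|12345||DOE^JOHN"             # identifier in legacy PID-2

def patient_id(segment: str) -> str:
    fields = segment.split("|")
    # Site-specific logic is unavoidable: prefer PID-3, fall back to PID-2,
    # and strip component-level detail after the '^' separator.
    return (fields[3] or fields[2]).split("^")[0]

assert patient_id(site_a) == patient_id(site_b) == "12345"
```

Integration engines exist largely to encapsulate this kind of per-site mapping so downstream systems see one consistent record.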


Adding to this complexity, healthcare data must meet strict security, privacy, and compliance requirements. The need for real-time data exchange, analytics, and artificial intelligence (AI) further increases the pressure on organisations to implement robust, scalable, and future-proof integration solutions.

The Build, Rent, or Buy Dilemma

When organisations decide how to approach interoperability, they typically weigh three options:

  • Building a solution from scratch offers full control but comes with high development costs, lengthy implementation timelines, and ongoing maintenance challenges. Ensuring compliance with HL7, FHIR, and other regulatory standards requires significant resources and expertise.
  • Renting an integration solution provides quick deployment at a lower initial cost but can lead to vendor lock-in, limited flexibility, and escalating costs as data volumes grow. Additionally, outsourced solutions may not prioritise healthcare-specific requirements, creating potential risks for compliance, security, and scalability.
  • Buying a purpose-built integration platform strikes a balance between control and flexibility. Solutions like InterSystems Health Connect and InterSystems IRIS for Health offer pre-built interoperability features while allowing organisations to customise and scale their integration as needed.

The Smart Choice: Owning Your Integration Future

To remain agile in an evolving healthcare landscape, organisations must consider the long-term impact of their integration choices. A well-designed interoperability strategy should allow for:

  • Customisation without complexity – Organisations should be able to tailor their integration capabilities without having to build from the ground up. This ensures they can adapt to new regulatory requirements and technological advancements.
  • Scalability without skyrocketing costs – A robust data platform should enable growth without the exponential cost increases often associated with rented solutions.
  • Security and compliance by design – Healthcare providers cannot afford to compromise on data privacy and security. A trusted interoperability partner should offer built-in compliance with international standards.

Some healthcare providers opt for platforms that combine pre-built interoperability with the flexibility to scale and customise as needed. For example, solutions designed to support seamless integration with electronic health records (EHRs), medical devices, and other healthcare systems can offer both operational efficiency and advanced analytics capabilities. The key is selecting an approach that aligns with both current and future needs, ensuring data remains accessible, secure, and actionable.

Preparing for the Future of Healthcare IT

As healthcare systems become more digital, the need for efficient, secure, and adaptable interoperability solutions will only intensify. The right integration strategy can determine whether an organisation thrives or struggles with inefficiencies, rising costs, and regulatory risks.

By choosing an approach that prioritises flexibility, control, and future-readiness, healthcare providers can unlock the full potential of their data – improving patient outcomes, driving operational efficiencies, and enabling innovation at scale.

The question isn’t just whether to build, rent, or buy – but how to create a foundation that ensures long-term success in healthcare interoperability.

AI-driven Telemedicine: Overcoming Adoption Barriers in Africa

Photo by Christina Morillo: https://www.pexels.com/photo/software-engineer-standing-beside-server-racks-1181354/

Artificial Intelligence (AI) is reshaping healthcare globally, and Africa stands to benefit immensely. AI-driven telemedicine is revolutionising access to care, offering innovative solutions to overcome healthcare challenges across the continent. From remote diagnostics to virtual consultations, AI is enhancing medical services, improving efficiency, and ultimately making healthcare more accessible to millions.

Understanding Telemedicine and AI

Telemedicine leverages telecommunications technology to provide remote healthcare services. It includes virtual consultations, remote patient monitoring, electronic health records, and AI-powered diagnostics. AI, through machine learning and natural language processing, analyses vast amounts of data rapidly, identifies patterns, and provides valuable insights. With AI doing the heavy lifting in healthcare, medical professionals can focus on patient care while benefiting from advanced decision-making support.

The Importance of Healthcare Access in Africa

The World Health Organization (WHO) estimates that over 60% of Africans lack access to essential healthcare services. A shortage of healthcare professionals and inadequate infrastructure exacerbate this challenge.

In South Africa alone, 50 million people rely on state healthcare, making cost-effective, high-quality solutions a necessity. Addressing healthcare access issues is crucial for improving public health, reducing mortality rates, and enhancing overall well-being.

The Role of AI in Telemedicine

AI-driven tools are enhancing medical diagnostics, improving accuracy and efficiency. For example, AI algorithms can analyse imaging scans, such as X-rays and MRIs, to detect conditions like tuberculosis and cancer. In South Africa, AI solutions developed by Qure.ai and EnvisionIT have demonstrated remarkable accuracy in interpreting chest X-rays, often surpassing general radiologists in detecting tuberculosis.

Velocity Skin Scanning further enables rapid dermatological screenings, providing timely and accurate diagnoses.

AI-powered chatbots, such as those used in Ghana’s mPharma initiative, assist in symptom assessment, medication stock predictions, and patient guidance. Virtual consultation platforms like DabaDoc in Nigeria and CareFirst, Unu Health, and Hello Doctor in South Africa enable seamless patient-doctor interactions, particularly in underserved areas. AI streamlines these services, ensuring better patient screening, appointment scheduling, and treatment accuracy.

Challenges in AI-Driven Telemedicine Adoption

Many African regions face limited internet connectivity, device accessibility issues, and electricity shortages, hindering telemedicine implementation. Satellite internet solutions, such as Starlink and solar-powered connectivity, present potential solutions.

Supportive regulatory frameworks are crucial for AI-driven healthcare success. Governments must develop policies that encourage innovation while safeguarding patient data. Collaborative efforts between policymakers and tech companies can facilitate AI integration into healthcare systems. The Association of Medical Councils of Africa (AMCOA) plays a key role in shaping such regulations.

Educating healthcare professionals on AI technologies is essential for effective implementation. Upskilling programs empower medical staff to utilise AI tools efficiently. Additionally, cultural acceptance of telemedicine varies, making community outreach and education initiatives vital for overcoming skepticism.

Technology costs often pose adoption challenges, particularly when solutions are not developed locally. However, virtual primary healthcare services are cost-effective and can serve as an entry point for widespread AI adoption. Strategies to enhance affordability include subscription models, public education, media promotion, healthcare practitioner reimbursement, cross-border medical registration, and economic incentives for AI adoption.

AI-Driven Solutions in Practice

CareFirst offers on-demand virtual doctor consultations, available 24/7 via video calls, telephonic consultations, and AI-driven vital scans.

Patients can access AI-driven vital scans to measure stress levels, blood pressure, heart rate, respiratory rate, glucose levels (HbA1C), and oxygen saturation. These tools provide real-time health insights, aiding proactive healthcare management.

Velocity Skin Scanning, powered by Belle AI and endorsed by the WHO, enables real-time dermatological assessments, facilitating early detection of skin conditions.

ER Consulting Inc. has adopted Scribe MD to improve medical record-keeping. This AI solution reduces doctor burnout, enhances patient interactions, lowers documentation time, mitigates medicolegal risks, and improves clinical data analysis.

Conclusion

AI-driven telemedicine has the potential to revolutionise healthcare accessibility in Africa. By addressing critical adoption barriers, fostering collaborations between governments, tech companies, and healthcare organizations, and leveraging AI-powered innovations, we can create a more connected, efficient, and inclusive healthcare ecosystem.

The future of healthcare in Africa is digital, and AI is paving the way toward a healthier, more accessible future for all.

AI Boosts Efficacy of Cancer Treatment, but Doctors Remain Key

Photo by Tara Winstead on Pexels

A new study led by researchers from Moffitt Cancer Center, in collaboration with investigators from the University of Michigan, shows that artificial intelligence (AI) can help doctors make better decisions when treating cancer. However, it also highlights challenges in how doctors and AI work together. The study, published in Nature Communications, focused on AI-assisted radiotherapy for non-small cell lung cancer and hepatocellular carcinoma.

Radiotherapy is a common treatment for cancer that uses high-energy radiation to kill or shrink tumors. The study looked at a treatment approach known as knowledge-based response-adaptive radiotherapy (KBR-ART). This method uses AI to optimize treatment outcomes by suggesting treatment adjustments based on how well the patient responds to the therapy.

The study found that when doctors used AI to help decide the best treatment plan, they made more consistent choices, reducing differences between doctors’ decisions. However, the technology didn’t always change doctors’ minds. In some cases, doctors disagreed with the AI’s suggestions and made treatment decisions based on their own experience and patient needs.

Doctors were asked to make treatment decisions for cancer patients, first without any technological assistance, and then with the help of AI. The AI system developed by the researchers uses patient data like medical imaging and test results to recommend changes in radiation doses. While some doctors found the suggestions helpful, others preferred to rely on their own judgment.

“While AI offers insights based on complex data, the human touch remains crucial in cancer care,” said Moffitt’s Issam El Naqa, PhD. “Every patient is unique, and doctors must make decisions based on both AI recommendations and their own clinical judgment.”

The researchers noted that while AI can be a helpful tool, doctors need to trust it for it to work well. Their study found that doctors were more likely to follow AI suggestions when they felt confident in its recommendations. “Our research shows that AI can be a powerful tool for doctors,” said Dipesh Niraula, PhD, an applied research scientist in Moffitt’s Machine Learning Department. “But it’s important to recognise that AI works best when it’s used as a support, not a replacement, for human expertise. Doctors bring their expertise and experience to the table, while AI provides data-driven insights. Together, they can make better treatment plans, but it requires trust and clear communication.”

The study’s authors hope that their findings can lead to better integration of AI tools and collaborative relationships that doctors can use to make more personalised treatment decisions for cancer patients. They also plan to further investigate how AI can support doctors in other medical fields.

Source: H. Lee Moffitt Cancer Center & Research Institute

Healthcare Trends to Watch in 2025

AI image made with Gencraft using Quicknews’ prompts.

Quicknews takes a look at some of the big events and concerns that defined healthcare in 2024, and looks into its crystal ball to identify new trends and emerging opportunities from various news and opinion pieces. There’s a lot going on right now: the battle to make universal healthcare a reality for South Africans, growing noncommunicable diseases, and new technologies and treatments – plus some hope in the fight against HIV and certain other diseases.

1. The uncertainty over NHI will continue

For South Africa, the biggest event in healthcare was the signing into law of the National Health Insurance (NHI) by President Ramaphosa in May 2024, right before the elections. This occurred in the face of stiff opposition from many healthcare associations. It has since been bogged down in legal battles, with a section governing the Certificate of Need to practice recently struck down by the High Court as it infringed on at least six constitutional rights.

Much uncertainty around the NHI has been expressed by various organisations such as the Health Funders Association (HFA). Potential pitfalls, as well as benefits and opportunities, have been highlighted. But the biggest obstacle of all is the sheer cost of the project, estimated at some R1.3 trillion. Funding it would need massive tax increases, an unworkable solution that would see an extra R37 000 in payroll tax. Modest economic growth of around 1.5% is expected for South Africa in 2025, but that is nowhere near enough surplus wealth to match the national healthcare of a country like Japan. And yet, amidst all the uncertainty, the healthcare sector is expected to do well in 2025.

Whether the Government of National Unity (GNU) will be able to hammer out a workable path forward for NHI remains an open question, with various parties at loggerheads over its implementation. Public–private partnerships are preferred by the DA and groups such as Solidarity, but whether the fragile GNU will last long enough for a compromise remains anybody’s guess.

It is reported that the latest NHI proposal from the ANC includes forcing medical aid schemes to lower their prices by competing with government – although Health Minister Aaron Motsoaledi has dismissed these reports. In any case, medical aid schemes are already increasing their rates as healthcare costs continue to rise in what is an inexorable global trend – fuelled in large part by ageing populations and increases in noncommunicable diseases.

2. New obesity treatments will be developed

Non-communicable diseases account for 56% of deaths in South Africa, and obesity is a major risk factor, along with hypertension and hyperglycaemia, which are often comorbid. GLP-1 agonists were all over the news in 2023 and 2024 as they were approved in certain countries for the treatment of obesity. But in South Africa, they are only approved for use in obesity with a diabetes diagnosis, after diet and exercise have failed to make a difference, with one exception. Doctors also caution against using them as a ‘silver bullet’. Some are calling for cost reductions as they can be quite expensive; a generic for liraglutide in SA is expected in the next few years.

Further on the horizon, a host of experimental drugs are undergoing testing for obesity treatment, according to a review published in Nature. While GLP-1 remains a target for many new drugs, others focus on gut hormones involved in appetite: GIP, glucagon, PYY and amylin. There are 5 new drugs in Phase 3 trials, variously expected to finish between 2025 and 2027, 10 drugs in Phase 2 clinical trials and 18 in Phase 1. Some are also finding applications beyond obesity. The GLP-1/glucagon dual agonist survodutide, for example, has received FDA breakthrough therapy designation not for obesity but for liver fibrosis.

With steadily increasing rates of overweight/obesity and disorders associated with them, this will continue to be a prominent research area. In the US, where the health costs of poor diet match what consumers spend on groceries, ‘food as medicine’ has become a major buzzword as companies strive to deliver healthy nutritional solutions. Retailers are providing much of the push, and South Africa is no exception. Medical aid scheme benefits are giving way to initiatives such as Pick n Pay’s Live Well Club, which simply offers triple Smart Shopper points to members who sign up.

Another promising approach to the obesity fight is precision medicine, which draws on extensive patient data to identify the best interventions. This could include detailed study of energy balance regulation, helping to select the right antiobesity medication based on actionable behavioural and physiologic traits. Genotyping, multi-omics, and big data analysis are growing fields that might also uncover additional signatures or phenotypes that respond better to certain interventions.

3. AI tools become the norm

Wearable health monitoring technology has gone from the lab to commonly available consumer products. Continued innovation in this field will lead to cheaper, more accurate devices with greater functionality. Smart rings, microneedle patches and even health monitoring using Bluetooth earphones such as Apple’s AirPods show how these devices are becoming smaller and more discreet. But health insurance schemes remain unconvinced as to their benefits.

After making a huge splash in 2024 as it rapidly evolved, AI technology is now maturing and entering a consolidation phase. Already, its use has become commonplace in many areas: the image at the top of this article is AI-generated, although it took a few attempts, with the doctors exhibiting polydactyly and the AI choosing to write “20215” instead of “2025”. An emerging area is using AI in patient phenotyping (classifying patients based on biological, behavioural, or genetic attributes) and digital twins (virtual simulations of individual patients), enabling precision medicine. Digital twins, for example, can serve as a “placebo” in a trial of a new treatment, as is being investigated in ALS research.

Rather than replacing human doctors, AI’s key application is likely to be lowering workforce costs, a major component of healthcare spending. Chatbots, for example, could engage with patients and help them navigate the healthcare system. Other AI applications include tools to speed up and improve diagnosis, eg in radiology, and aiding communication within the healthcare system by helping to draft and structure notes.

4. Emerging solutions to labour shortages

Given the long lead times to recruit and train healthcare workers, 2025 is unlikely to see any change to the massive shortages across all positions, from nurses to specialists.

At the same time, public healthcare has seen freezes on hiring, resulting in the paradoxical situation of unemployed junior doctors in a country desperately in need of more doctors – 800 were without posts at the start of 2024. The DA has tabled a Bill to amend the Health Professions Act that would allow private healthcare to recruit interns and those doing community service. Critics have pointed out that it would exacerbate the existing public–private healthcare gap.

But there are some welcome developments: thanks to a five-year plan from the Department of Health, family physicians in SA are finally going to get their chance to shine and address many problems in healthcare delivery. These ‘super generalists’ are equipped with a four-year specialisation and are set to take up roles as clinical managers, leading multi-disciplinary district hospital teams.

Less obvious is where the country will be able to secure enough nurses to meet its needs. The main challenge is that nurses, especially specialist nurses, are ageing – and it’s not clear where their replacements are coming from. In the next 15 years, some 48% of the country’s nurses are set to retire. Coupled with that is the general consensus that the new nursing training curriculum is a flop: the old one, from 1987 to 2020, produced nurses with well-rounded skills, says Simon Hlungwani, president of the Democratic Nursing Organisation of South Africa (Denosa). There’s also a skills bottleneck: institutions like Baragwanath used to cater for 300 students at a time; now they are approved to handle only 80. The drive for recruitment will also have to be accompanied by some serious educational reform to get back on track.

5. Progress against many diseases

Sub-Saharan Africa continues to drive declines in new HIV infections. Lifetime odds of acquiring HIV have fallen by 60% since the 1995 peak. The region also saw the largest decrease in the population with unsuppressed HIV (PUV), from 19.7 million people in 2003 to 11.3 million in 2021. While growth in the number of people living with HIV is slowing, that number is predicted to peak at 44.4 million globally by 2039. But the UNAIDS HIV targets for 2030 are unlikely to be met.

As human papillomavirus (HPV) vaccination programmes continue, cervical cancer deaths in young women are plummeting, a trend which is certain to continue.

A ‘new’ respiratory virus currently circulating in China will fortunately not be the next COVID. Unlike SARS-CoV-2, human metapneumovirus (HMPV) has been around for decades, and only causes a few days of mild illness, with bed rest and fluids as the primary treatment. The virus has limited pandemic potential, according to experts.

AI Developed “Beer Goggles” Looking at Knee X-rays

Photo by Pavel Danilyuk on Pexels

Medicine, like most fields, is transforming as the capabilities of artificial intelligence expand at lightning speed. AI integration can be a useful tool to healthcare professionals and researchers, including in interpretation of diagnostic imaging. Where a radiologist can identify fractures and other abnormalities from an X-ray, AI models can see patterns humans cannot, offering the opportunity to expand the effectiveness of medical imaging.

A study led by Dartmouth Health researchers, in collaboration with the Veterans Affairs Medical Center in White River Junction, VT, and published in Nature’s Scientific Reports, highlights the hidden challenges of using AI in medical imaging research. The study examined highly accurate yet potentially misleading results – a phenomenon known as “shortcut learning.”

Using knee X-rays from the Osteoarthritis Initiative, researchers demonstrated that AI models could “predict” unrelated and implausible traits, such as whether patients abstained from eating refried beans or drinking beer. While these predictions have no medical basis, the models achieved surprising levels of accuracy, revealing their ability to exploit subtle and unintended patterns in the data.

“While AI has the potential to transform medical imaging, we must be cautious,” said Peter L. Schilling, MD, MS, an orthopaedic surgeon at Dartmouth Health’s Dartmouth Hitchcock Medical Center (DHMC) and an assistant professor of orthopaedics in Dartmouth’s Geisel School of Medicine, who served as senior author on the study. “These models can see patterns humans cannot, but not all patterns they identify are meaningful or reliable. It’s crucial to recognize these risks to prevent misleading conclusions and ensure scientific integrity.”

Schilling and his colleagues examined how AI algorithms often rely on confounding variables – such as differences in X-ray equipment or clinical site markers – to make predictions, rather than on medically meaningful features. Attempts to eliminate these biases were only marginally successful: the AI models would simply “learn” other hidden data patterns.
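The effect is easy to reproduce in miniature. The sketch below is a toy simulation (invented numbers, not the study’s models or data): a trivial linear classifier is trained on synthetic “X-rays” that carry no genuine signal for the trait being predicted, only a site-dependent brightness offset in a single pixel. Because the trait happens to be more common at one clinic, the model still scores well above chance – and zeroing out that one confounded pixel collapses its accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Toy "X-rays": 100-pixel images of pure noise -- no genuine signal for the trait.
images = rng.normal(size=(n, 100))

# Hypothetical confounder: clinic B's equipment brightens pixel 0, and (by an
# accident of sampling) the unrelated trait is far more common at clinic B.
site = rng.integers(0, 2, size=n)
images[:, 0] += 2.0 * site
trait = (rng.random(n) < np.where(site == 1, 0.9, 0.1)).astype(int)

# Fit a trivial linear classifier (least squares on centred labels).
train, test = slice(0, 1500), slice(1500, None)
mu = images[train].mean(axis=0)
w, *_ = np.linalg.lstsq(images[train] - mu,
                        trait[train] - trait[train].mean(), rcond=None)

def accuracy(weights):
    pred = ((images[test] - mu) @ weights > 0).astype(int)
    return (pred == trait[test]).mean()

w_blinded = w.copy()
w_blinded[0] = 0.0  # remove the one confounded pixel

print(f"accuracy using all pixels:       {accuracy(w):.2f}")          # well above chance
print(f"accuracy without the site pixel: {accuracy(w_blinded):.2f}")  # near chance (0.5)
```

The “blinding” step mirrors the researchers’ experience: removing one shortcut only helps if there is no other confounded pattern left for the model to latch onto.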

The research team’s findings underscore the need for rigorous evaluation standards in AI-based medical research. Over-reliance on standard algorithms without deeper scrutiny could lead to erroneous clinical insights and treatment pathways.

“This goes beyond bias from clues of race or gender,” said Brandon G. Hill, a machine learning scientist at DHMC and one of Schilling’s co-authors. “We found the algorithm could even learn to predict the year an X-ray was taken. It’s pernicious; when you prevent it from learning one of these elements, it will instead learn another it previously ignored. This danger can lead to some really dodgy claims, and researchers need to be aware of how readily this happens when using this technique.”

“The burden of proof just goes way up when it comes to using models for the discovery of new patterns in medicine,” Hill continued. “Part of the problem is our own bias. It is incredibly easy to fall into the trap of presuming that the model ‘sees’ the same way we do. In the end, it doesn’t. It is almost like dealing with an alien intelligence. You want to say the model is ‘cheating,’ but that anthropomorphizes the technology. It learned a way to solve the task given to it, but not necessarily how a person would. It doesn’t have logic or reasoning as we typically understand it.”

To read Schilling and Hill’s study – which was also authored by Frances L. Koback, a third-year student at the Geisel School of Medicine at Dartmouth – visit bit.ly/4gox9jq.

Source: Dartmouth College

Analysis of Repeat Mammograms Improves Cancer Prediction

Photo by National Cancer Institute on Unsplash

A new study describes an innovative method of analysing mammograms that significantly improves the accuracy of predicting the risk of breast cancer development over the following five years. Using up to three years of previous mammograms, the new method identified individuals at high risk of developing breast cancer 2.3 times more accurately than the standard method, which is based on questionnaires assessing clinical risk factors alone, such as age, race and family history of breast cancer.

The study, from Washington University School of Medicine in St. Louis, appears in JCO Clinical Cancer Informatics.

“We are seeking ways to improve early detection, since that increases the chances of successful treatment,” said senior author Graham A. Colditz, MD, DrPH, associate director, prevention and control, of Siteman Cancer Center, based at Barnes-Jewish Hospital and WashU Medicine. “This improved prediction of risk also may help research surrounding prevention, so that we can find better ways for women who fall into the high-risk category to lower their five-year risk of developing breast cancer.”

This risk-prediction method builds on past research led by Colditz and lead author Shu (Joy) Jiang, PhD, a statistician, data scientist and associate professor at WashU Medicine. The researchers showed that prior mammograms hold a wealth of information on early signs of breast cancer development that can’t be perceived even by a well-trained human eye. This information includes subtle changes over time in breast density, which is a measure of the relative amounts of fibrous versus fatty tissue in the breasts.

For the new study, the team built an algorithm based on artificial intelligence that can discern subtle differences in mammograms and help identify those women at highest risk of developing a new breast tumour over a specific timeframe. In addition to breast density, their machine-learning tool considers changes in other patterns in the images, including in texture, calcification and asymmetry within the breasts.

“Our new method is able to detect subtle changes over time in repeated mammogram images that are not visible to the eye,” said Jiang, “yet these changes hold rich information that can help identify high-risk individuals.”

At the moment, risk-reduction options are limited and can include drugs such as tamoxifen that lower risk but may have unwanted side effects. Most of the time, women at high risk are offered more frequent screening or the option of adding another imaging method, such as an MRI, to try to identify cancer as early as possible.

“Today, we don’t have a way to know who is likely to develop breast cancer in the future based on their mammogram images,” said co-author Debbie L. Bennett, MD, an associate professor of radiology and chief of breast imaging for the Mallinckrodt Institute of Radiology at WashU Medicine. “What’s so exciting about this research is that it indicates that it is possible to glean this information from current and prior mammograms using this algorithm. The prediction is never going to be perfect, but this study suggests the new algorithm is much better than our current methods.”

AI improves prediction of breast cancer development

The researchers trained their machine-learning algorithm on the mammograms of more than 10 000 women who received breast cancer screenings through Siteman Cancer Center from 2008–2012. These individuals were followed through 2020, and in that time 478 were diagnosed with breast cancer.

The researchers then applied their method to predict breast cancer risk in a separate set of 18 000 women who received mammograms from 2013–2020. Subsequently, 332 women were diagnosed with breast cancer during the follow-up period, which ended in 2020.

According to the new prediction model, women in the high-risk group were 21 times more likely to be diagnosed with breast cancer over the following five years than were those in the lowest-risk group. In the high-risk group, 53 out of every 1000 women screened developed breast cancer over the next five years. In contrast, in the low-risk group, 2.6 women per 1000 screened developed breast cancer over the following five years. Under the old questionnaire-based method, only 23 women per 1000 screened were correctly classified in the high-risk group, meaning that the old method, in this case, missed 30 breast cancer cases per 1000 that the new method identified.
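The reported figures line up arithmetically, as a quick back-of-the-envelope check on the published (rounded) rates shows – this is illustrative arithmetic, not the study’s own statistics:

```python
# 5-year outcomes per 1000 women screened, as reported in the study summary.
new_high_risk = 53    # cancers per 1000 in the new model's high-risk group
new_low_risk = 2.6    # cancers per 1000 in its low-risk group
old_high_risk = 23    # per 1000 correctly flagged by the questionnaire method

# 53 / 2.6 gives roughly 20 from these rounded rates; the unrounded data
# behind the paper yield the reported 21-fold difference.
print(f"high- vs low-risk ratio: {new_high_risk / new_low_risk:.1f}x")
print(f"extra cases per 1000 caught by the new method: {new_high_risk - old_high_risk}")
```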

The mammograms were performed at academic medical centres and community clinics, demonstrating that the accuracy of the method holds up in diverse settings. Importantly, the algorithm was built with robust representation of Black women, who are usually underrepresented in the development of breast cancer risk models, and its accuracy for predicting risk held up across racial groups. Of the women screened through Siteman, most were white, and 27% were Black. Of those screened through Emory, 42% were Black.

Source: Washington University School of Medicine in St. Louis

Is AI a Better Doctors’ Diagnostic Resource than Traditional Ones?

With hospitals already deploying artificial intelligence (AI) to improve patient care, a new study has found that using ChatGPT Plus does not significantly improve the accuracy of doctors’ diagnoses compared with the use of conventional resources. 

The study, from UVA Health’s Andrew S. Parsons, MD, MPH and colleagues, enlisted 50 physicians in family medicine, internal medicine and emergency medicine to put ChatGPT Plus to the test. Half were randomly assigned to use ChatGPT Plus to diagnose complex cases, while the other half relied on conventional methods such as medical reference sites (for example, UpToDate) and Google. The researchers then compared the resulting diagnoses, finding that the accuracy across the two groups was similar.

That said, ChatGPT Plus alone outperformed both groups, suggesting that it still holds promise for improving patient care. Physicians, however, will need more training and experience with the emerging technology to capitalise on its potential, the researchers conclude. 

For now, ChatGPT remains best used to augment, rather than replace, human physicians, the researchers say.

“Our study shows that AI alone can be an effective and powerful tool for diagnosis,” said Parsons, who oversees the teaching of clinical skills to medical students at the University of Virginia School of Medicine and co-leads the Clinical Reasoning Research Collaborative. “We were surprised to find that adding a human physician to the mix actually reduced diagnostic accuracy though improved efficiency. These results likely mean that we need formal training in how best to use AI.”

ChatGPT for Disease Diagnosis

Chatbots built on “large language models” that produce human-like responses are growing in popularity, and they have shown impressive ability to take patient histories, communicate empathetically and even solve complex medical cases. But, for now, they still require the involvement of a human doctor. 

Parsons and his colleagues were eager to determine how the high-tech tool can be used most effectively, so they launched a randomized, controlled trial at three leading-edge hospitals – UVA Health, Stanford and Harvard’s Beth Israel Deaconess Medical Center.

The participating docs made diagnoses for “clinical vignettes” based on real-life patient-care cases. These case studies included details about patients’ histories, physical exams and lab test results. The researchers then scored the results and examined how quickly the two groups made their diagnoses. 

The median diagnostic accuracy for the docs using ChatGPT Plus was 76.3%, while that for the physicians using conventional approaches was 73.7%. The ChatGPT group reached their diagnoses slightly more quickly overall – 519 seconds compared with 565 seconds.

The researchers were surprised at how well ChatGPT Plus alone performed, with a median diagnostic accuracy of more than 92%. They say this may reflect the prompts used in the study, suggesting that physicians will likely benefit from training on how to use prompts effectively. Alternatively, they say, healthcare organisations could purchase predefined prompts to implement in clinical workflow and documentation.

The researchers also caution that ChatGPT Plus would likely fare less well in real life, where many other aspects of clinical reasoning come into play – especially in determining the downstream effects of diagnoses and treatment decisions. They’re urging additional studies to assess large language models’ abilities in those areas and are conducting a similar study on management decision-making. 

“As AI becomes more embedded in healthcare, it’s essential to understand how we can leverage these tools to improve patient care and the physician experience,” Parsons said. “This study suggests there is much work to be done in terms of optimising our partnership with AI in the clinical environment.”

Following up on this groundbreaking work, the four study sites have also launched a bicoastal AI evaluation network called ARiSE (AI Research and Science Evaluation) to further evaluate GenAI outputs in healthcare. Find out more information at the ARiSE website.

Source: University of Virginia Health System

Researchers Find Persistent Problems with AI-assisted Genomic Studies

Photo by Sangharsh Lohakare on Unsplash

In a paper published in Nature Genetics, researchers are warning that artificial intelligence tools gaining popularity in the fields of genetics and medicine can lead to flawed conclusions about the connection between genes and physical characteristics, including risk factors for diseases like diabetes.

The faulty predictions are linked to researchers’ use of AI to assist genome-wide association studies, according to the University of Wisconsin–Madison researchers. Such studies scan through hundreds of thousands of genetic variations across many people to hunt for links between genes and physical traits. Of particular interest are possible connections between genetic variations and certain diseases.

Genetics’ link to disease not always straightforward

Genetics play a role in the development of many health conditions. While changes in some individual genes are directly connected to an increased risk for diseases like cystic fibrosis, the relationship between genetics and physical traits is often more complicated.

Genome-wide association studies have helped to untangle some of these complexities, often using large databases of individuals’ genetic profiles and health characteristics, such as the National Institutes of Health’s All of Us project and the UK Biobank. However, these databases are often missing data about health conditions that researchers are trying to study.

“Some characteristics are either very expensive or labour-intensive to measure, so you simply don’t have enough samples to make meaningful statistical conclusions about their association with genetics,” says Qiongshi Lu, an associate professor in the UW–Madison Department of Biostatistics and Medical Informatics and an expert on genome-wide association studies.

The risks of bridging data gaps with AI

Researchers are increasingly attempting to work around this problem by bridging data gaps with ever more sophisticated AI tools.

“It has become very popular in recent years to leverage advances in machine learning, so we now have these advanced machine-learning AI models that researchers use to predict complex traits and disease risks with even limited data,” Lu says.

Now, Lu and his colleagues have demonstrated the peril of relying on these models without also guarding against biases they may introduce. In their paper, they show that a common type of machine learning algorithm employed in genome-wide association studies can mistakenly link several genetic variations with an individual’s risk for developing Type 2 diabetes.

“The problem is if you trust the machine learning-predicted diabetes risk as the actual risk, you would think all those genetic variations are correlated with actual diabetes even though they aren’t,” says Lu.

These “false positives” are not limited to these specific variations and diabetes risk, Lu adds, but are a pervasive bias in AI-assisted studies.
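A toy simulation illustrates the mechanism the researchers describe (a sketch with made-up numbers, not their actual analysis): when the phenotype tested in a genome-wide scan is itself the output of a predictive model, every variant the model happened to use lights up as “associated”, even though none of them influences the true trait.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 5000, 200  # individuals, genetic variants

# Simulated genotypes (0/1/2 allele counts). The TRUE trait below depends on
# none of them -- it is pure noise, so an honest GWAS should find nothing.
G = rng.integers(0, 3, size=(n, m)).astype(float)
true_trait = rng.normal(size=n)

# Hypothetical ML imputation: a model (trained elsewhere) fills in the missing
# phenotype using the first 10 variants plus noise. Its output is correlated
# with those variants by construction, not because they affect the true trait.
predicted_trait = 0.3 * G[:, :10].sum(axis=1) + rng.normal(size=n)

def assoc_hits(phenotype, z_threshold=4.0):
    """Crude stand-in for a GWAS: count variants whose correlation with the
    phenotype has an approximate |z-score| above the threshold."""
    Gz = (G - G.mean(axis=0)) / G.std(axis=0)
    pz = (phenotype - phenotype.mean()) / phenotype.std()
    r = Gz.T @ pz / n     # per-variant sample correlation
    z = r * np.sqrt(n)    # large-sample z approximation
    return int((np.abs(z) > z_threshold).sum())

print("hits against the true trait:   ", assoc_hits(true_trait))       # essentially none
print("hits against the imputed trait:", assoc_hits(predicted_trait))  # the imputer's variants
```

Treating the imputed phenotype as if it were the measured one converts the imputation model’s inputs into apparent genetic associations – the pervasive bias Lu’s group warns about.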

New statistical method can reduce false positives

In addition to identifying the problem of overreliance on AI tools, Lu and his colleagues propose a statistical method that researchers can use to help ensure the reliability of their AI-assisted genome-wide association studies. The method helps remove bias that machine learning algorithms can introduce when they’re making inferences based on incomplete information.

“This new strategy is statistically optimal,” Lu says, noting that the team used it to better pinpoint genetic associations with individuals’ bone mineral density.

AI not the only problem with some genome-wide association studies

While the group’s proposed statistical method could help improve the accuracy of AI-assisted studies, Lu and his colleagues also recently identified problems with similar studies that fill data gaps with proxy information rather than algorithms.

In another recently published paper appearing in Nature Genetics, the researchers sound the alarm about studies that over-rely on proxy information in an attempt to establish connections between genetics and certain diseases.

For instance, large health databases like the UK Biobank have a ton of genetic information about large populations, but they don’t have very much data regarding the incidence of diseases that tend to crop up later in life, like most neurodegenerative diseases.

For Alzheimer’s disease specifically, some researchers have attempted to bridge that gap with proxy data gathered through family health history surveys, where individuals can report a parent’s Alzheimer’s diagnosis.

The UW–Madison team found that such proxy-information studies can produce “highly misleading genetic correlation” between Alzheimer’s risk and higher cognitive abilities.

“These days, genomic scientists routinely work with biobank datasets that have hundreds of thousands of individuals; however, as statistical power goes up, biases and the probability of errors are also amplified in these massive datasets,” says Lu. “Our group’s recent studies provide humbling examples and highlight the importance of statistical rigor in biobank-scale research studies.”

Source: University of Wisconsin-Madison