Category: IT in Healthcare

‘Medicine in the Dark Ages’: Healthcare Hit Hard by Global IT Outage

Photo by Joshua Hoehne on Unsplash

On Friday, users around the world began encountering a “blue screen of death”, signalling the start of a day of chaos. About 8.5 million Windows devices were affected by a faulty software update, resulting in significant global disruption across airlines, finance and even small businesses. Healthcare infrastructure was also affected, potentially endangering an unknown number of lives through missed appointments and inaccessible patient records, prescriptions and inventory data.

Worldwide, hospitals reported being unable to use their systems to access key information such as schedules, patient medical records and logistics. Reports emerged of cancelled procedures, and non-urgent patients being turned away.

“Many hospitals are cancelling elective procedures today. Patients should direct any questions to their providers because this is a practice-by-practice, hospital-by-hospital decision,” said the Massachusetts Department of Public Health in a statement.

In the UK, NHS England warned of delays and the British Medical Association advised of a backlog for normal GP service.

‘Like practising medicine in the dark ages’

Across Africa, many hospitals and clinics depend on Microsoft 365 and cloud services for crucial functions, Nehanda Radio reported. The outage highlights how dependent critical infrastructure has become on the stability of a handful of platforms.

“Our entire hospital was thrown into disarray. We couldn’t access patient files, schedule surgeries, or coordinate with suppliers,” said Dr Amina Salim, the chief medical officer at a major hospital in Abuja, Nigeria.

“It was like practising medicine in the dark ages. Our doctors and nurses were forced to resort to hand-written notes and countless phone calls just to provide basic care.”

“I went to refill my HIV medication and the pharmacist said their computers were down, so they couldn’t look up my prescription. I was worried I’d have to go without my treatment,” said Thembi Ndlovu, a patient in Johannesburg.

The problem was worsened in rural and underserved areas that are heavily reliant on the internet and cloud services for remote consultations, sharing of medical expertise and centralised databases.

“Our telemedicine program came to a screeching halt. We couldn’t video conference with specialists, access test results, or update patient records,” said Dr Khalid Elmahdi, the director of a rural health clinic in Morocco. “It was devastating for communities that have few other options for advanced care.”

The crashes were traced to an update from a security service provider, CrowdStrike – which, ironically, provides protection against ransomware, a problem that has been plaguing healthcare.

While most services seemed to be up and running after the weekend, experts say that full recovery may take weeks. Fixing the problem often requires physically accessing each machine and booting it with a USB recovery tool, which can be difficult in certain locations, such as remote clinics.

Hacked Healthcare: New KnowBe4 Report Shines a Spotlight on Cybersecurity Crisis in Sector

Report shows the alarming global rise of cyberattacks on the healthcare sector and the urgent need to prioritise cybersecurity

Photo by Nahel Abdul on Unsplash

KnowBe4 (www.KnowBe4.com), the provider of the world’s largest security awareness training and simulated phishing platform, released its International Healthcare Report. The report takes a closer look at the cybersecurity crisis currently experienced by the healthcare sector, in particular hospital groups, across the world.

Africa was the global region with the highest average number of weekly cyberattacks per organisation in 2023, with one in every 19 organisations on the continent experiencing an attempted attack each week. Although South Africa’s healthcare sector has avoided a major attack since 2020, the alarming escalation of attacks in the country’s other sectors suggests that the next one is a question of “when” rather than “if”.

Hospitals have become increasingly attractive targets for ransomware attacks due to their comprehensive patient databases, sensitive information, and the interconnectedness of their systems and equipment. Moreover, weak security measures have left hospitals vulnerable to cyber threats. In a successful attack, cybercriminals can potentially take control of entire hospital systems, gaining access not only to patients’ health information but also to their financial and insurance data.

Hospitals are severely impacted by cyberattacks (https://apo-opa.co/4csCXH4), which can lead to a reduction in patient care, loss of access to electronic systems, and a reliance on incomplete paper records. This can also result in the cancellation of surgeries, tests, appointments, and, in some cases, even loss of life.

Some shocking facts discussed in the report include:

  • In the first three quarters of 2023, the global healthcare sector experienced a staggering 1,613 cyberattacks per week, nearly four times the global average, and a significant increase from the same period the previous year.
  • The healthcare sector has seen a dramatic surge in cyberattack costs over the past three years, with the average cost of a breach reaching nearly $11 million, more than three times the global average. This makes healthcare the costliest sector for cyberattacks.
  • Ransomware attacks have been the most prevalent type of cyberattack on healthcare organisations, accounting for over 70% of successful attacks in the past two years.
  • The majority of cyberattacks (between 79% and 91%), across sectors, begin with phishing or social engineering tactics, which allow cybercriminals to gain access to accounts or servers.
  • According to KnowBe4’s 2024 Phishing by Industry Benchmarking Report (https://apo-opa.co/4csuiEB), healthcare and pharmaceutical organisations are among the most vulnerable to phishing attacks, with employees in large organisations in the sector having a 51.4% likelihood of falling victim to a phishing email. This means that cybercriminals have a better than 50/50 chance of successfully phishing an employee in the sector.
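
As a quick, hypothetical illustration of why that 51.4% figure matters: if each targeted employee is treated as clicking independently with roughly that probability (a simplification – real click behaviour is not independent, and the “phish-prone percentage” is a benchmark, not a per-email probability), the odds that a campaign gets at least one click rise very fast with the number of targets.

```python
# Illustrative only: probability that at least one employee clicks a
# phishing email, assuming each of n employees clicks independently
# with probability p. A sketch, not KnowBe4's methodology.

def p_at_least_one_click(p: float, n: int) -> float:
    """Chance that a campaign targeting n employees gets >= 1 click."""
    return 1 - (1 - p) ** n

# With a 51.4% baseline, even a tiny campaign is near-certain to land.
print(round(p_at_least_one_click(0.514, 1), 3))  # 0.514
print(round(p_at_least_one_click(0.514, 5), 3))  # 0.973
```

Under these (admittedly crude) assumptions, phishing just five employees already gives an attacker roughly a 97% chance of at least one click.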

“The healthcare sector remains a prime target for cybercriminals looking to capitalise on the life-or-death situations hospitals face,” says Stu Sjouwerman, CEO of KnowBe4. “With patient data and critical systems held hostage, many hospitals feel like they are left with no choice but to pay exorbitant ransoms. This vicious cycle can be broken by prioritising comprehensive security awareness training to empower employees and cultivate a positive security culture as a strong defence against phishing and social engineering attacks.”

The report examines the state of cybersecurity in the healthcare sector in North America, Europe, the United Kingdom, Asia-Pacific, Africa, and Latin America. It also highlights some of the most prolific global ransomware attacks that occurred between December 2023 and May 2024, their aftermath, and what healthcare organisations can do to protect themselves from cyberattacks.

To download a copy of KnowBe4’s International Healthcare Report, click here (https://apo-opa.co/3xIjjaY).

AI Models that can Identify Patient Demographics in X-rays are Also Unfair

Photo by Anna Shvets

Artificial intelligence models often play a role in medical diagnoses, especially when it comes to analysing images such as X-rays. But these models have been found not to perform equally well across all demographic groups, usually faring worse on women and people of colour.

These models have also been shown to develop some surprising abilities. In 2022, MIT researchers reported that AI models can make accurate predictions about a patient’s race from their chest X-rays – something that the most skilled radiologists can’t do.

Now, in a new study appearing in Nature, the same research team has found that the models that are most accurate at making demographic predictions also show the biggest “fairness gaps” – that is, reduced accuracy when diagnosing images of people of certain races or genders. The findings suggest that these models may be using “demographic shortcuts” when making their diagnostic evaluations, leading to incorrect results for women, Black people and other groups, the researchers say.

“It’s well-established that high-capacity machine-learning models are good predictors of human demographics such as self-reported race or sex or age. This paper re-demonstrates that capacity, and then links that capacity to the lack of performance across different groups, which has never been done,” says senior author Marzyeh Ghassemi, an MIT associate professor of electrical engineering and computer science.

The researchers also found that they could retrain the models in a way that improves their fairness. However, their approaches to “debiasing” worked best when the models were tested on the same types of patients they were trained on, such as patients from the same hospital. When these models were applied to patients from different hospitals, the fairness gaps reappeared.

“I think the main takeaways are, first, you should thoroughly evaluate any external models on your own data because any fairness guarantees that model developers provide on their training data may not transfer to your population. Second, whenever sufficient data is available, you should train models on your own data,” says Haoran Zhang, an MIT graduate student and one of the lead authors of the new paper.

Removing bias

As of May 2024, the FDA has approved 882 AI-enabled medical devices, with 671 of them designed to be used in radiology. Since 2022, when Ghassemi and her colleagues showed that these diagnostic models can accurately predict race, they and other researchers have shown that such models are also very good at predicting gender and age, even though the models are not trained on those tasks.

“Many popular machine learning models have superhuman demographic prediction capacity – radiologists cannot detect self-reported race from a chest X-ray,” Ghassemi says. “These are models that are good at predicting disease, but during training are learning to predict other things that may not be desirable.”

In this study, the researchers set out to explore why these models don’t work as well for certain groups. In particular, they wanted to see if the models were using demographic shortcuts to make predictions that ended up being less accurate for some groups. These shortcuts can arise in AI models when they use demographic attributes to determine whether a medical condition is present, instead of relying on other features of the images.

Using publicly available chest X-ray datasets from Beth Israel Deaconess Medical Center (BIDMC) in Boston, the researchers trained models to predict whether patients had one of three different medical conditions: fluid buildup in the lungs, collapsed lung, or enlargement of the heart. Then, they tested the models on X-rays that were held out from the training data.

Overall, the models performed well, but most of them displayed “fairness gaps” – that is, discrepancies between accuracy rates for men and women, and for white and Black patients.

The models were also able to predict the gender, race, and age of the X-ray subjects. Additionally, there was a significant correlation between each model’s accuracy in making demographic predictions and the size of its fairness gap. This suggests that the models may be using demographic categorisations as a shortcut to make their disease predictions.
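
In rough terms, a “fairness gap” of this kind can be computed as the accuracy difference between the best- and worst-served subgroup. The sketch below is purely illustrative – the function, labels and toy data are hypothetical, not the study’s actual metrics or code.

```python
# Hypothetical sketch of a fairness-gap metric: the accuracy difference
# between demographic subgroups on the same diagnostic task.

def fairness_gap(y_true, y_pred, groups):
    """Accuracy gap between the best- and worst-served subgroup."""
    accs = {}
    for g in set(groups):
        pairs = [(t, p) for t, p, gg in zip(y_true, y_pred, groups) if gg == g]
        accs[g] = sum(t == p for t, p in pairs) / len(pairs)
    return max(accs.values()) - min(accs.values())

# Toy example: the model is right 3/4 of the time for group "A"
# but only 2/4 of the time for group "B" -> gap of 0.25.
y_true = [1, 0, 1, 1, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(fairness_gap(y_true, y_pred, groups))  # 0.25
```

A model can score well on overall accuracy while still showing a large gap, which is exactly why per-subgroup evaluation matters.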

The researchers then tried to reduce the fairness gaps using two types of strategies. For one set of models, they trained them to optimise “subgroup robustness,” meaning that the models are rewarded for having better performance on the subgroup for which they have the worst performance, and penalised if their error rate for one group is higher than the others.

In another set of models, the researchers forced them to remove any demographic information from the images, using “group adversarial” approaches. Both strategies worked fairly well, the researchers found.
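
A minimal sketch of the subgroup-robustness idea described above, in the spirit of worst-group optimisation: instead of minimising the average loss, the training objective tracks the subgroup with the worst loss. The function names and numbers are hypothetical, not the study’s actual training code.

```python
# Sketch: average loss vs a "worst group" objective. Optimising the
# latter rewards models for improving their weakest subgroup.

def average_loss(losses):
    return sum(losses) / len(losses)

def worst_group_loss(losses, groups):
    """Return the highest mean loss over any subgroup."""
    per_group = {}
    for loss, g in zip(losses, groups):
        per_group.setdefault(g, []).append(loss)
    return max(average_loss(v) for v in per_group.values())

losses = [0.25, 0.25, 0.75, 1.25]        # per-example losses (toy values)
groups = ["men", "men", "women", "women"]
print(average_loss(losses))              # 0.625 (looks fine on average)
print(worst_group_loss(losses, groups))  # 1.0   (exposes the worst group)
```

Group-adversarial approaches, by contrast, add a second objective that penalises the model whenever group membership can be recovered from its internal representation.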

“For in-distribution data, you can use existing state-of-the-art methods to reduce fairness gaps without making significant trade-offs in overall performance,” Ghassemi says. “Subgroup robustness methods force models to be sensitive to mispredicting a specific group, and group adversarial methods try to remove group information completely.”

Not always fairer

However, those approaches only worked when the models were tested on data from the same types of patients that they were trained on, eg from BIDMC.

When the researchers tested the models that had been “debiased” using the BIDMC data to analyse patients from five other hospital datasets, they found that the models’ overall accuracy remained high, but some of them exhibited large fairness gaps.

“If you debias the model in one set of patients, that fairness does not necessarily hold as you move to a new set of patients from a different hospital in a different location,” Zhang says.

This is worrisome because in many cases, hospitals use models that have been developed on data from other hospitals, especially in cases where an off-the-shelf model is purchased, the researchers say.

“We found that even state-of-the-art models which are optimally performant in data similar to their training sets are not optimal – that is, they do not make the best trade-off between overall and subgroup performance – in novel settings,” Ghassemi says. “Unfortunately, this is actually how a model is likely to be deployed. Most models are trained and validated with data from one hospital, or one source, and then deployed widely.”

The researchers found that the models that were debiased using group adversarial approaches showed slightly more fairness when tested on new patient groups than those debiased with subgroup robustness methods. They now plan to try to develop and test additional methods to see if they can create models that do a better job of making fair predictions on new datasets.

The findings suggest that hospitals that use these types of AI models should evaluate them on their own patient population before beginning to use them, to make sure they aren’t giving inaccurate results for certain groups.

Datacentres Form Part of Healthcare Critical Systems – Carrying the Load and so Much More

Photo by Christina Morillo

By Ben Selier, Vice President: Secure Power, Anglophone Africa at Schneider Electric

The adage “knowledge is king” couldn’t be more applicable when it comes to the collection and use of data. And at the heart of this knowledge, and the resulting information, lies the datacentre. Businesses and users count on datacentres, all the more so in critical services such as healthcare.

Many hospitals today rely heavily on electronic health records (EHR), and this information resides and is backed up in on-premises datacentres or in the cloud. Datacentres are therefore a major contributor to effective and modernised healthcare.

There are several considerations when designing datacentres for healthcare. For one, hospitals operate within stringent legislation when it comes to the protection of patient information.  The National Health Act (No. 61 of 2003), for example, stipulates that information must not be given to others unless the patient consents or the healthcare practitioner can justify the disclosure.

Datacentres form part of critical systems

To add an extra layer of complexity, in South Africa datacentres should feature built-in continuous uptime and energy backup due to the country’s unstable power supply. Hospitals must therefore be designed to operate autonomously from the grid, especially when they provide emergency and critical care.

Typically, datacentres are classified in tiers, with the Uptime Institute citing that a Tier-4 datacentre provides 99.995% availability, annual downtime of 0.4 hours, full redundancy, and power outage protection of 96 hours.
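
As a back-of-the-envelope check on those figures, the annual downtime follows directly from the availability percentage (8,760 hours in a year):

```python
# Annual downtime implied by an availability percentage.

HOURS_PER_YEAR = 24 * 365  # 8,760

def annual_downtime_hours(availability_pct: float) -> float:
    """Hours of downtime per year allowed by a given availability."""
    return (1 - availability_pct / 100) * HOURS_PER_YEAR

# Tier-4's 99.995% availability allows roughly 0.44 hours (about
# 26 minutes) of downtime per year, matching the ~0.4 hours cited.
print(round(annual_downtime_hours(99.995), 2))  # 0.44
print(round(annual_downtime_hours(99.9), 1))    # 8.8 (for comparison)
```

Even at Tier-4, those 26-odd minutes explain why hospital-critical systems are held to a stricter standard than the tier figure alone suggests.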

In healthcare, when human lives are at stake, downtime is simply not an option. And whilst certain healthcare systems and their required availability are comparable to a typical Tier-3 or Tier-4 scenario, critical systems in hospitals carry a higher design consideration and must run 24/7 with immediate availability.

In healthcare, the critical infrastructure of a hospital enjoys priority. What this means is that the datacentre is there to protect the IT systems, which in turn ensure the smooth running of these critical systems and equipment. There is therefore a delicate balance between the critical systems and infrastructure and the datacentre; one can’t exist without the other.

Design considerations

To realise the above, hospitals must feature a strong mix of alternative energy resources such as backup generators, uninterruptible power supply (UPS) systems and renewables such as rooftop solar.

Additionally, as with most organisations, storage volume, storage type and cloud systems will vary from hospital to hospital. To this end, datacentre design for hospitals is anything but cookie-cutter; teams need to work closely with the hospital whilst meeting industry standards for healthcare.

When designing healthcare facilities system infrastructure, the following should also be considered:

  • Software such as Building Management Systems (BMS) is not just about building efficiency; it also monitors and adjusts indoor conditions such as temperature, humidity and air quality.

The BMS contributes to health and safety and critical operations in hospitals whilst also enabling patient comfort.

  • Maintenance – both building and systems maintenance transcends operational necessity and becomes a matter of life or death.
  • As mentioned, generators are essential for delivering continuous power, which means enough fuel must be stored to run them. Hospitals must store this fuel safely and in compliance with stringent regulations. In South Africa, proactively managing refuelling timelines is also critical: response times for refuelling these fuel bunkers can be severely hindered by issues such as traffic congestion caused by outages and traffic lights not working.

Selecting the right equipment for hospitals is therefore a delicate balance between technological advancement and safety. For instance, while lithium batteries offer many benefits, when used in hospitals it is paramount that they are stored in a dry, cool and safe location.

Here, implementing an extinguishing system is a must to alleviate any potential damage from fire or explosions. That said, lithium batteries are generally considered safe to use, but it’s important to be cognisant of their potential safety hazards.

Ultimately, hospitals carry the added weight of human lives, which means the design of critical systems requires meticulous planning and execution.

AI Analyses Fitbit Data to Predict Spine Surgery Outcomes

Photo by Barbara Olsen on Pexels

Researchers who had been using Fitbit data to help predict surgical outcomes have a new method to more accurately gauge how patients may recover from spine surgery.

Using machine learning techniques developed at the AI for Health Institute at Washington University in St. Louis, Chenyang Lu, the Fullgraf Professor in the university’s McKelvey School of Engineering, collaborated with Jacob Greenberg, MD, assistant professor of neurosurgery at the School of Medicine, to develop a way to predict recovery more accurately from lumbar spine surgery.

The results show that their model outperforms previous models for predicting spine surgery outcomes. This is important because in lower back surgery, and in many other types of orthopaedic operation, outcomes vary widely depending not only on the patient’s structural disease but also on physical and mental health characteristics that differ across patients. The study is published in Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies.

Surgical recovery is influenced by both preoperative physical and mental health. Some people may catastrophise, or worry excessively, in the face of pain, which can make pain and recovery worse. Others may suffer from physiological problems that cause worse pain. If physicians can get a heads-up on each patient’s likely pitfalls, they can create better individualised treatment plans.

“By predicting the outcomes before the surgery, we can help establish some expectations and help with early interventions and identify high risk factors,” said Ziqi Xu, a PhD student in Lu’s lab and first author on the paper.

Previous work in predicting surgery outcomes typically used patient questionnaires given once or twice in clinics that capture only one static slice of time.

“It failed to capture the long-term dynamics of physical and psychological patterns of the patients,” Xu said. Prior work training machine learning algorithms focused on just one aspect of surgical outcome “but ignore the inherent multidimensional nature of surgery recovery,” she added.

Researchers have used mobile health data from Fitbit devices to monitor and measure recovery and to compare activity levels over time, but this research has shown that activity data combined with longitudinal assessment data is more accurate in predicting how a patient will do after surgery, Greenberg said.

The current work offers a “proof of principle” showing that, with multimodal machine learning, doctors can see a much more accurate “big picture” of all the interrelated factors that affect recovery. Before this work, the team first laid out the statistical methods and protocol to ensure they were feeding the AI the right, balanced diet of data.

Prior to the current publication, the team published an initial proof of principle in Neurosurgery showing that patient-reported and objective wearable measurements improve predictions of early recovery compared to traditional patient assessments. In addition to Greenberg and Xu, Madelynn Frumkin, a PhD psychological and brain sciences student in Thomas Rodebaugh’s laboratory in Arts & Sciences, was co-first author on that work. Wilson “Zack” Ray, MD, the Henry G. and Edith R. Schwartz Professor of neurosurgery in the School of Medicine, was co-senior author, along with Rodebaugh and Lu. Rodebaugh is now at the University of North Carolina at Chapel Hill.

In that research, they showed that Fitbit data can be correlated with multiple surveys that assess a person’s social and emotional state. They collected those data via “ecological momentary assessments” (EMAs), which use smartphones to prompt patients to record mood, pain levels and behaviour multiple times throughout the day.

“We combine wearables, EMAs and clinical records to capture a broad range of information about the patients, from physical activities to subjective reports of pain and mental health, to clinical characteristics,” Lu said.

Greenberg added that state-of-the-art statistical tools that Rodebaugh and Frumkin have helped advance, such as “Dynamic Structural Equation Modeling,” were key in analyzing the complex, longitudinal EMA data.

For the most recent study, they took all those factors and developed a new machine learning technique, “Multi-Modal Multi-Task Learning” (M3TL), to effectively combine these different types of data to predict multiple recovery outcomes.

In this approach, the AI learns to weigh the relatedness among the outcomes while capturing their differences from the multimodal data, Lu adds.

This method takes shared information on interrelated tasks of predicting different outcomes and then leverages the shared information to help the model understand how to make an accurate prediction, according to Xu.

It all comes together in a final package that produces a predicted change in each patient’s post-operative pain interference and physical function scores.
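
As a toy illustration of the general idea behind multi-task learning with shared parameters – a hypothetical sketch, not the published M3TL architecture – two task heads (one per outcome) can read from the same shared hidden features:

```python
# Hard parameter sharing, sketched in plain Python: one shared layer
# feeds two task-specific heads. Weights and features are made up.

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def forward(x, shared_w, head_pain_w, head_function_w):
    """Shared ReLU layer -> two task heads, each producing one score."""
    hidden = [max(0.0, dot(row, x)) for row in shared_w]
    return dot(head_pain_w, hidden), dot(head_function_w, hidden)

# Toy inputs: 3 features (e.g. step count, mood rating, pain report)
# mapped to 2 shared features, then to one prediction per task.
x = [0.5, 1.0, -0.5]
shared_w = [[1.0, 0.0, 0.0], [0.0, 1.0, 1.0]]
pain_w, function_w = [1.0, -1.0], [0.5, 0.5]
print(forward(x, shared_w, pain_w, function_w))  # (0.0, 0.5)
```

Because both heads train against the same shared layer, patterns useful for one outcome can improve predictions for the other, which is the “shared information” benefit Xu describes.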

Greenberg says the study is ongoing as they continue to fine tune their models so they can take these more detailed assessments, predict outcomes and, most notably, “understand what types of factors can potentially be modified to improve longer term outcomes.”

Source: Washington University in St. Louis

Earn CPD Points with EthiQal’s Webinar on Record Keeping

On Wednesday 5 June at 18:00, EthiQal cordially invites you to attend their ethics webinar, “Documenting care: Effective record-keeping and requests for records”.

Hosted by Dr Hlombe Makuluma, Medicolegal Advisor at EthiQal, this webinar will be co-presented by two admitted attorneys, Mashooma Parker and Jessica Viljoen, who are both legal advisors within the claims team at EthiQal. The 90-minute session will cover compliance for record-keeping requirements as well as dealing with requests for patient records from patients and third parties.

Participants will gain valuable insights to enhance their record-keeping practices and to handle requests for records responsibly and in compliance with their ethical and legal obligations.

Mashooma Parker is a skilled Legal Advisor within the Claims & Legal team at EthiQal, specialising in medical malpractice. With a strong background in the legal field and a passion for assisting healthcare practitioners, Mashooma brings a wealth of expertise to navigate the complexities that arise with patients and third parties. Presenting the first topic, she will cover the requirements for healthcare practitioners to ensure record-keeping compliance with Booklet 9 of the HPCSA’s Ethical Guidelines.

Jessica Viljoen is an admitted attorney and legal advisor specialising in professional indemnity insurance for healthcare practitioners, and medical malpractice law. With her extensive experience within the medico-legal space, including her years of litigation experience, Jessica leverages her industry knowledge to provide legal advice and assistance to all specialties of medical practitioners throughout South Africa. She will present the second part of the talk, which will deal with Patient and Third-party requests for patient records and how to ensure compliance with the Promotion of Access to Information Act 2 of 2000.

The speakers will offer useful tips from a medico-legal risk-management perspective for health practitioners to be cognisant of, and will work through practical examples to illustrate the importance of the topic.

At least one hour’s attendance on the Zoom Platform is required to earn CPD points, and for those unable to watch it live, a recording will be made available.

Click here to register now

Best Practice in POPIA Compliance in TeleHealth

By Wayne Janneker, Executive for Mining Industrial and Health Management at BCX

In the intricate field of healthcare, where privacy and the security of patients’ data are of utmost importance, the Protection of Personal Information Act (POPIA) emerges as cornerstone legislation. Specifically crafted to safeguard individual privacy, POPIA carries profound implications for the healthcare sector, particularly in the protection of patients’ medical data.

POPIA establishes a framework for healthcare professionals, mandating that they make reasonable efforts to inform patients before obtaining personal information from alternative sources. The Act places significant emphasis on the secure and private management of patients’ medical records, instilling a sense of responsibility within the healthcare community.

Section 26 of the Act unequivocally prohibits the processing of personal health information, yet Section 32(1) introduces a caveat. This section extends exemptions to medical professionals and healthcare institutions, but only under the condition that such information is essential for providing proper treatment and care pathways. It’s a delicate balance, ensuring the patient’s well-being while respecting the boundaries of privacy.

A breach of POPIA transpires when personal information is acquired without explicit consent, accessed unlawfully, or when healthcare professionals fall short of taking reasonable steps to prevent unauthorised disclosure, potentially causing harm or distress to the patient. The consequences for non-compliance are severe, ranging from substantial monetary compensation to imprisonment.

For healthcare providers, especially those venturing into the realm of telehealth services, navigating POPIA compliance is of critical importance. Good clinical practices become the guiding principles in this journey of upholding patient confidentiality and privacy.

Let’s delve into the essentials of ensuring privacy in healthcare, where understanding the nuances of privacy laws becomes the bedrock for healthcare providers. It’s not merely about keeping up with regulations; it’s about aligning practices with the legal landscape, creating a solid foundation for what follows.

When we shift the focus to telehealth, selecting platforms tailored to meet POPIA requirements is imperative. Envision these platforms as protectors of patient information, featuring end-to-end encryption and secure data storage, creating a fortress around sensitive data. But we can’t stop there; we need to be proactive. Regular risk assessments become the secret weapon, requiring healthcare providers to stay ahead of the game, constantly evolving and nipping potential security threats in the bud.

Managing the human element—the healthcare team—becomes significant. Educating them about compliance, data security, and the significance of patient confidentiality adds another layer of protection. When everyone comprehends their role in maintaining compliance, it’s akin to having a team of protectors ensuring the safety of patient data.

Establishing clear policies and procedures around telehealth use, patient consent, and the secure handling of patient data is our compass for ethical and legal navigation. It’s not just about ticking boxes; it’s about creating a roadmap that ensures we’re on the right path.

Informed consent is the cornerstone of this journey. It’s about building trust with patients through transparent communication. Secure communication channels, encryption of patient data, stringent access controls, regular internal audits and airtight data-breach response plans all form part of a strategy that ensures a state of readiness to tackle any challenges that come our way.

In this dynamic landscape, technology can’t be static. Regular updates to telehealth technology, software, and security measures are our way of staying in sync with evolving threats and regulations.

Healthcare providers aren’t necessarily experts on the Act or technology, which is why consulting with legal experts specialising in healthcare can provide accurate information on which to base decisions. It ensures that practices aren’t just compliant but resilient against any legal scrutiny that may come their way.

The final and most crucial element is the patient. Their feedback is like a map, guiding healthcare providers to areas of improvement. By monitoring and seeking insights from patients regarding their telehealth experiences, providers uncover ways to enhance their compliance measures.

In embracing these best practices and remaining vigilant to changes, healthcare practitioners and providers can navigate POPIA compliance successfully and deliver high-quality health and telehealth services. It’s a commitment to patient privacy, data security, and the evolving landscape of healthcare regulations that will propel the industry forward.

When it Comes to Healthcare, AI Still Needs Human Supervision

Photo by Tara Winstead on Pexels

State-of-the-art artificial intelligence systems known as large language models (LLMs) are poor medical coders, according to researchers at the Icahn School of Medicine at Mount Sinai. Their study, published in NEJM AI, emphasises the necessity for refinement and validation of these technologies before considering clinical implementation.

The study extracted a list of more than 27 000 unique diagnosis and procedure codes from 12 months of routine care in the Mount Sinai Health System, while excluding identifiable patient data. Using the description for each code, the researchers prompted models from OpenAI, Google, and Meta to output the most accurate medical codes. The generated codes were compared with the original codes and errors were analysed for any patterns.
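The comparison step described here amounts to an exact-match evaluation. A minimal sketch in Python, assuming a hypothetical `query_model` callable standing in for each LLM's API (the study's actual prompts and post-processing are not reproduced here):

```python
# Minimal sketch of an exact-match evaluation of the kind the study describes.
# `query_model` is a hypothetical stand-in for a call to an LLM API.

def exact_match_rate(records, query_model):
    """records: list of (description, original_code) pairs."""
    hits = 0
    for description, original_code in records:
        generated = query_model(
            f"Give the most accurate medical code for: {description}"
        )
        # Normalise trivially before comparing (case and whitespace only).
        if generated.strip().upper() == original_code.strip().upper():
            hits += 1
    return hits / len(records)

# Toy usage with a fake "model" that always answers "600.10":
sample = [("nodular prostate without urinary obstruction", "600.10"),
          ("unspecified adverse effect of anesthesia", "995.22")]
rate = exact_match_rate(sample, lambda prompt: "600.10")
print(rate)  # 0.5
```

A real evaluation would also classify the non-matching codes, as the researchers did, to distinguish outright errors from codes that are merely more general than the original.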

The investigators reported that all of the studied large language models, including GPT-4, GPT-3.5, Gemini-pro, and Llama-2-70b, showed limited accuracy (below 50%) in reproducing the original medical codes, highlighting a significant gap in their usefulness for medical coding. GPT-4 demonstrated the best performance, with the highest exact match rates for ICD-9-CM (45.9%), ICD-10-CM (33.9%), and CPT codes (49.8%).

GPT-4 also produced the highest proportion of incorrectly generated codes that still conveyed the correct meaning. For example, when given the ICD-9-CM description “nodular prostate without urinary obstruction,” GPT-4 generated a code for “nodular prostate,” showcasing its comparatively nuanced understanding of medical terminology. However, even considering these technically correct codes, an unacceptably large number of errors remained.

The next best-performing model, GPT-3.5, had the greatest tendency toward vagueness: it produced the highest proportion of incorrectly generated codes that were accurate but more general than the precise original codes. For example, when provided with the ICD-9-CM description “unspecified adverse effect of anesthesia,” GPT-3.5 generated a code for “other specified adverse effects, not elsewhere classified.”

“Our findings underscore the critical need for rigorous evaluation and refinement before deploying AI technologies in sensitive operational areas like medical coding,” says study corresponding author Ali Soroush, MD, MS, Assistant Professor of Data-Driven and Digital Medicine (D3M), and Medicine (Gastroenterology), at Icahn Mount Sinai. “While AI holds great potential, it must be approached with caution and ongoing development to ensure its reliability and efficacy in health care.”

Source: The Mount Sinai Hospital / Mount Sinai School of Medicine

AI Helps Clinicians to Assess and Treat Leg Fractures

Photo by Tima Miroshnichenko on Pexels

By using artificial intelligence (AI) techniques to process gait analyses and medical records data of patients with leg fractures, researchers have uncovered insights on patients and aspects of their recovery.

The study, published in the Journal of Orthopaedic Research, found a significant association between the rates of hospital readmission after fracture surgery and the presence of underlying medical conditions. Correlations were also found between underlying medical conditions and orthopaedic complications, although these links were not significant.

It was also apparent that gait analyses in the early postinjury phase offer valuable insights into the injury’s impact on locomotion and recovery. For clinical professionals, these patterns were key to optimising rehabilitation strategies.

“Our findings demonstrate the profound impact that integrating machine learning and gait analysis into orthopaedic practice can have, not only in improving the accuracy of post-injury complication predictions but also in tailoring rehabilitation strategies to individual patient needs,” said corresponding author Mostafa Rezapour, PhD, of Wake Forest University School of Medicine. “This approach represents a pivotal shift towards more personalised, predictive, and ultimately more effective orthopaedic care.”

Dr. Rezapour added that the study underscores the critical importance of adopting a holistic view that encompasses not just the mechanical aspects of injury recovery but also the broader spectrum of patient health. “This is a step forward in our quest to optimize rehabilitation strategies, reduce recovery times, and improve overall quality of life for patients with lower extremity fractures,” he said.

Source: Wiley

Admin and Ethics should be the Basis of Your Healthcare AI Strategy

Technology continues to play a strong role in shaping healthcare. In 2023, the focus was on how Artificial Intelligence (AI) became significantly entrenched in patient records, diagnosis and care. In 2024, the focus has shifted to the ethical aspects of AI. Many organisations, including practitioner groups, hospitals and medical associations, are putting together AI Codes of Conduct, and new legislation is planned in countries such as the USA.

The entire patient journey has benefited from AI in tangible ways. Online bookings, the sharing of information with electronic health records, keyword diagnosis, sharing of visual scans, e-scripts, easy claims, SMSs and billing are all examples of how software systems are incorporated into practices to streamline the experience for both patient and doctor. But although 75% of medical professionals agree on the transformative potential of AI, only 6% have implemented an AI strategy.

Strategies need to include ethical considerations

CompuGroup Medical South Africa (CGM SA), a leading international MedTech company that has spent over 20 years designing software solutions for the healthcare industry, has identified one area that constantly recurs as a topic of ethical consideration.

This is the sharing of patient electronic health records, or EHRs. On one hand, the wealth of information in each EHR (a patient's medical history, demographics, laboratory test results over time, prescribed medicines, a history of medical procedures, X-rays and medical allergies) offers endless opportunities for real-time patient care. On the other hand, there is a basic mistrust of how these records will be shared and stored: no one wants their personal medical information to end up on the internet.

But there is also the philosophical view that although you might not want your information to be public record, it can still benefit the care of thousands of people. If we want a learning AI system that adapts as we do, and a decision-support system informed by past experience, then the sharing of data should be viewed as a tool rather than a privacy barrier.

Admin can cause burnout

Based on its interactions with professionals, CGM has informally noted that healthcare practices spend 73% of their time on administrative tasks. This breaks down into 38% on EHR documentation and review, 19% on insurance and billing, 11% on tests, medications and other orders, and 6% on clinical planning and logistics.

Even during the consultation, doctors can spend up to 40% of their time taking clinical notes. Besides the extra burden this places on healthcare practices, it means less attention is paid to the patient, and it still leaves 1-2 hours of admin in the evenings. Admin is the number one cause of burnout in clinicians, and too much screen time during interactions is the number one complaint from patients.

The solution

The implementation of valuable and effective software, such as Autoscriber, will assist medical practitioners with time saving, data quality and overall job satisfaction. Autoscriber is an AI engine designed to ease the effort of creating clinical notes by turning the consultation between patient and doctor into a structured summary that includes ICD-10 codes, the standard classification of diseases used by South African medical professionals.

It identifies clinical facts in real time, including medications and symptoms. It then orders and summarises the data in a format ready for import into the EHR, creating a more detailed and standardised report on each patient encounter, allowing for a more holistic patient outcome. In essence, with the introduction of Autoscriber into the South African market, CGM seeks to aid practitioners in swiftly creating precise and efficient clinical records, saving them from extensive after-hours commitments.

Dilip Naran, VP of Product Architecture at CGM SA, explains: “It is clear that AI will not replace healthcare professionals, but it will augment their capabilities to provide superior patient care. Ethical considerations are important but should not override patient care or safety. The Autoscriber solution provides full control to the HCP to use, edit or discard the transcribed note, ensuring that these notes are comprehensive, attributable and contemporaneous.”