Category: IT in Healthcare

AI Models that can Identify Patient Demographics in X-rays are Also Unfair

Photo by Anna Shvets

Artificial intelligence models often play a role in medical diagnoses, especially when it comes to analysing images such as X-rays. But these models have been found not to perform equally well across all demographic groups, usually faring worse on women and people of colour.

These models have also been shown to develop some surprising abilities. In 2022, MIT researchers reported that AI models can make accurate predictions about a patient’s race from their chest X-rays – something that the most skilled radiologists can’t do.

Now, in a new study appearing in Nature, the same research team has found that the models that are most accurate at making demographic predictions also show the biggest “fairness gaps” – that is, reduced accuracy when diagnosing images of people of different races or genders. The findings suggest that these models may be using “demographic shortcuts” when making their diagnostic evaluations, which lead to incorrect results for women, Black people, and other groups, the researchers say.

“It’s well-established that high-capacity machine-learning models are good predictors of human demographics such as self-reported race or sex or age. This paper re-demonstrates that capacity, and then links that capacity to the lack of performance across different groups, which has never been done,” says senior author Marzyeh Ghassemi, an MIT associate professor of electrical engineering and computer science.

The researchers also found that they could retrain the models in a way that improves their fairness. However, their approach to “debiasing” worked best when the models were tested on the same types of patients they were trained on, such as patients from the same hospital. When these models were applied to patients from different hospitals, the fairness gaps reappeared.

“I think the main takeaways are, first, you should thoroughly evaluate any external models on your own data because any fairness guarantees that model developers provide on their training data may not transfer to your population. Second, whenever sufficient data is available, you should train models on your own data,” says Haoran Zhang, an MIT graduate student and one of the lead authors of the new paper.

Removing bias

As of May 2024, the FDA has approved 882 AI-enabled medical devices, with 671 of them designed to be used in radiology. Since 2022, when Ghassemi and her colleagues showed that these diagnostic models can accurately predict race, they and other researchers have shown that such models are also very good at predicting gender and age, even though the models are not trained on those tasks.

“Many popular machine learning models have superhuman demographic prediction capacity – radiologists cannot detect self-reported race from a chest X-ray,” Ghassemi says. “These are models that are good at predicting disease, but during training are learning to predict other things that may not be desirable.”

In this study, the researchers set out to explore why these models don’t work as well for certain groups. In particular, they wanted to see if the models were using demographic shortcuts to make predictions that ended up being less accurate for some groups. These shortcuts can arise in AI models when they use demographic attributes to determine whether a medical condition is present, instead of relying on other features of the images.

Using publicly available chest X-ray datasets from Beth Israel Deaconess Medical Center (BIDMC) in Boston, the researchers trained models to predict whether patients had one of three different medical conditions: fluid buildup in the lungs, collapsed lung, or enlargement of the heart. Then, they tested the models on X-rays that were held out from the training data.

Overall, the models performed well, but most of them displayed “fairness gaps” – that is, discrepancies between accuracy rates for men and women, and for white and Black patients.

The models were also able to predict the gender, race, and age of the X-ray subjects. Additionally, there was a significant correlation between each model’s accuracy in making demographic predictions and the size of its fairness gap. This suggests that the models may be using demographic categorisations as a shortcut to make their disease predictions.
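The relationship can be made concrete with a toy calculation. The sketch below is illustrative only (not the study's code, and the numbers are invented): it computes a fairness gap as the accuracy difference between two subgroups and then correlates that gap with each model's accuracy at predicting the demographic attribute.

```python
import numpy as np
from scipy.stats import pearsonr

def fairness_gap(y_true, y_pred, group):
    """Absolute difference in diagnostic accuracy between two subgroups
    (e.g. male vs female, or white vs Black patients)."""
    acc = lambda mask: (y_pred[mask] == y_true[mask]).mean()
    return abs(acc(group == 0) - acc(group == 1))

# Hypothetical per-model numbers: each model's accuracy at predicting the
# demographic attribute, and the fairness gap measured for the same model.
demographic_accuracy = np.array([0.62, 0.71, 0.80, 0.88, 0.93])
measured_gaps        = np.array([0.02, 0.04, 0.05, 0.09, 0.11])

r, p = pearsonr(demographic_accuracy, measured_gaps)
print(f"correlation between demographic accuracy and fairness gap: r = {r:.2f}")
```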

The researchers then tried to reduce the fairness gaps using two types of strategies. For one set of models, they trained them to optimise “subgroup robustness,” meaning that the models are rewarded for having better performance on the subgroup for which they have the worst performance, and penalised if their error rate for one group is higher than the others.
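A common way to implement this kind of subgroup-robust objective is to optimise against the loss of the worst-performing group (a group-DRO-style formulation). The study's exact training setup is not reproduced here; the following PyTorch sketch only illustrates the idea.

```python
import torch
import torch.nn.functional as F

def worst_group_loss(logits, labels, groups, n_groups):
    """Return the loss of the worst-performing subgroup.

    Optimising this term penalises a model whose error on any one group
    (e.g. a race or sex subgroup) is higher than on the others.
    """
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    group_losses = []
    for g in range(n_groups):
        mask = groups == g
        if mask.any():
            group_losses.append(per_sample[mask].mean())
    return torch.stack(group_losses).max()

# In a training loop, the sketch would be used roughly as:
#   loss = worst_group_loss(model(x), y, group_ids, n_groups=4)
#   loss.backward(); optimiser.step()
```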

In another set of models, the researchers forced them to remove any demographic information from the images, using “group adversarial” approaches. Both strategies worked fairly well, the researchers found.
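The “group adversarial” idea is often realised with a gradient-reversal layer: an auxiliary head tries to predict the demographic attribute from the model's internal features, while reversed gradients push the feature extractor to discard that information. The sketch below shows the general mechanism, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class AdversarialDebiaser(nn.Module):
    def __init__(self, encoder, n_diseases, n_groups, lam=1.0):
        super().__init__()
        self.encoder = encoder                      # e.g. a CNN backbone for chest X-rays
        self.disease_head = nn.LazyLinear(n_diseases)
        self.group_head = nn.LazyLinear(n_groups)
        self.lam = lam

    def forward(self, x):
        feats = self.encoder(x)
        disease_logits = self.disease_head(feats)
        # Reversed gradients discourage the encoder from encoding group information.
        group_logits = self.group_head(GradReverse.apply(feats, self.lam))
        return disease_logits, group_logits
```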

“For in-distribution data, you can use existing state-of-the-art methods to reduce fairness gaps without making significant trade-offs in overall performance,” Ghassemi says. “Subgroup robustness methods force models to be sensitive to mispredicting a specific group, and group adversarial methods try to remove group information completely.”

Not always fairer

However, those approaches only worked when the models were tested on data from the same types of patients that they were trained on, eg from BIDMC.

When the researchers tested the models that had been “debiased” using the BIDMC data to analyse patients from five other hospital datasets, they found that the models’ overall accuracy remained high, but some of them exhibited large fairness gaps.

“If you debias the model in one set of patients, that fairness does not necessarily hold as you move to a new set of patients from a different hospital in a different location,” Zhang says.

This is worrisome because in many cases, hospitals use models that have been developed on data from other hospitals, especially in cases where an off-the-shelf model is purchased, the researchers say.

“We found that even state-of-the-art models which are optimally performant in data similar to their training sets are not optimal – that is, they do not make the best trade-off between overall and subgroup performance – in novel settings,” Ghassemi says. “Unfortunately, this is actually how a model is likely to be deployed. Most models are trained and validated with data from one hospital, or one source, and then deployed widely.”

The researchers found that the models that were debiased using group adversarial approaches showed slightly more fairness when tested on new patient groups than those debiased with subgroup robustness methods. They now plan to try to develop and test additional methods to see if they can create models that do a better job of making fair predictions on new datasets.

The findings suggest that hospitals that use these types of AI models should evaluate them on their own patient population before beginning to use them, to make sure they aren’t giving inaccurate results for certain groups.

Datacentres Form Part of Healthcare Critical Systems – Carrying the Load and so Much More

Photo by Christina Morillo

By Ben Selier, Vice President: Secure Power, Anglophone Africa at Schneider Electric

The adage “knowledge is king” couldn’t be more applicable when it comes to the collection and utilisation of data. And at the heart of this knowledge, and the information that results from it, lies the datacentre. Businesses and users count on datacentres, and even more so in critical services such as healthcare.

Many hospitals today rely heavily on electronic health records (EHR), and this information resides and is backed up in on-premises datacentres or in the cloud. Datacentres are therefore a major contributor to effective and modernised healthcare.

There are several considerations when designing datacentres for healthcare. For one, hospitals operate within stringent legislation when it comes to the protection of patient information.  The National Health Act (No. 61 of 2003), for example, stipulates that information must not be given to others unless the patient consents or the healthcare practitioner can justify the disclosure.

Datacentres form part of critical systems

To add an extra layer of complexity, in South Africa, datacentres should feature built-in continuous uptime and energy backup due to the country’s unstable power supply.  Hospitals must therefore be designed to be autonomous from the grid, especially when they provide emergency and critical care.

Typically, datacentres are classified in tiers, with the Uptime Institute citing that a Tier-4 datacentre provides 99.995% availability, annual downtime of 0.4 hours, full redundancy, and power outage protection of 96 hours.
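The downtime figure follows directly from the availability percentage; as a rough check, assuming a 365-day (8760-hour) year:

```python
# Quick sanity check of the Tier-4 figures cited above (illustrative).
hours_per_year = 365 * 24                     # 8760 hours
availability = 0.99995                        # 99.995%
downtime_hours = hours_per_year * (1 - availability)
print(f"{downtime_hours:.2f} hours of downtime per year")  # ~0.44 hours, i.e. roughly 0.4
```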

In healthcare, where human lives are at stake, downtime is simply not an option. And whilst certain healthcare systems and their required availability are comparable to a typical Tier-3 or Tier-4 scenario, critical systems in hospitals carry higher design considerations and must run 24/7 with immediate availability.

In healthcare, the critical infrastructure of a hospital enjoys priority. What this means is that the datacentre is there to protect the IT system, which in turn ensures the smooth running of these critical systems and equipment. There is therefore a delicate balance between the critical systems and infrastructure and the datacentre: one can’t exist without the other.

Design considerations

To realise the above, hospitals must feature a strong mix of alternative energy resources such as backup generators, uninterruptible power supply (UPS) systems and renewables such as rooftop solar.

Additionally, as with most organisations, storage volume, storage type and cloud systems will vary from hospital to hospital. To this end, datacentre design for hospitals is anything but cookie cutter; teams need to work closely with the hospital whilst meeting industry standards for healthcare.

When designing healthcare facilities system infrastructure, the following should also be considered:

  • Software such as a Building Management System (BMS) is not just about building efficiency; it also offers benefits such as monitoring and adjusting indoor conditions like temperature, humidity and air quality.

The BMS contributes to health and safety and critical operations in hospitals whilst also enabling patient comfort.

  • Maintenance – both building and systems maintenance transcend operational necessity and become a matter of life or death.
  • As mentioned, generators are essential for delivering continuous power, which means enough fuel must be stored to run them. Here, hospitals must store fuel safely and in compliance with stringent regulations. In South Africa, proactively managing refuelling timelines is also critical. The response times for refuelling these fuel bunkers can be severely hindered by issues such as traffic congestion caused by outages and traffic lights not working.

Selecting the right equipment for hospitals is therefore a delicate balance between technological advancement and safety. For instance, while lithium batteries offer many benefits, when used in hospitals it is paramount that they are stored in a dry, cool and safe location.

Here, implementing an extinguishing system is a must to alleviate any potential damage from fire or explosions. That said, lithium batteries are generally considered safe to use, but it’s important to be cognisant of their potential safety hazards.

Ultimately, hospitals carry the added weight of human lives, which means the design of critical systems requires meticulous planning and execution.

AI Analyses Fitbit Data to Predict Spine Surgery Outcomes

Photo by Barbara Olsen on Pexels

Researchers who had been using Fitbit data to help predict surgical outcomes have a new method to more accurately gauge how patients may recover from spine surgery.

Using machine learning techniques developed at the AI for Health Institute at Washington University in St. Louis, Chenyang Lu, the Fullgraf Professor in the university’s McKelvey School of Engineering, collaborated with Jacob Greenberg, MD, assistant professor of neurosurgery at the School of Medicine, to develop a way to predict recovery more accurately from lumbar spine surgery.

The results show that their model outperforms previous models for predicting spine surgery outcomes. This is important because in lower back surgery, and in many other types of orthopaedic operations, outcomes vary widely depending not only on the patient’s structural disease but also on physical and mental health characteristics that differ across patients. The study is published in Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies.

Surgical recovery is influenced by both preoperative physical and mental health. Some people may catastrophise, or worry excessively, in the face of pain, which can make pain and recovery worse. Others may suffer from physiological problems that cause worse pain. If physicians can get a heads-up on the various pitfalls for each patient, they can develop better individualised treatment plans.

“By predicting the outcomes before the surgery, we can help establish some expectations and help with early interventions and identify high risk factors,” said Ziqi Xu, a PhD student in Lu’s lab and first author on the paper.

Previous work in predicting surgery outcomes typically used patient questionnaires given once or twice in clinics that capture only one static slice of time.

“It failed to capture the long-term dynamics of physical and psychological patterns of the patients,” Xu said. Prior work training machine learning algorithms focused on just one aspect of surgery outcome “but ignore the inherent multidimensional nature of surgery recovery,” she added.

Researchers have previously used mobile health data from Fitbit devices to monitor and measure recovery and to compare activity levels over time, but this research has shown that activity data combined with longitudinal assessment data is more accurate in predicting how the patient will do after surgery, Greenberg said.

The current work offers a “proof of principle” showing that, with multimodal machine learning, doctors can see a much more accurate “big picture” of all the interrelated factors that affect recovery. Before this work, the team first laid out the statistical methods and protocol to ensure they were feeding the AI the right balanced diet of data.

Prior to the current publication, the team published an initial proof of principle in Neurosurgery showing that patient-reported and objective wearable measurements improve predictions of early recovery compared to traditional patient assessments. In addition to Greenberg and Xu, Madelynn Frumkin, a PhD psychological and brain sciences student in Thomas Rodebaugh’s laboratory in Arts & Sciences, was co-first author on that work. Wilson “Zack” Ray, MD, the Henry G. and Edith R. Schwartz Professor of neurosurgery in the School of Medicine, was co-senior author, along with Rodebaugh and Lu. Rodebaugh is now at the University of North Carolina at Chapel Hill.

In that research, they showed that Fitbit data can be correlated with multiple surveys that assess a person’s social and emotional state. They collected that data via “ecological momentary assessments” (EMAs), which use smartphones to prompt patients to assess mood, pain levels and behaviour multiple times throughout the day.

“We combine wearables, EMAs and clinical records to capture a broad range of information about the patients, from physical activities to subjective reports of pain and mental health, and to clinical characteristics,” Lu said.

Greenberg added that state-of-the-art statistical tools that Rodebaugh and Frumkin have helped advance, such as “Dynamic Structural Equation Modeling,” were key in analyzing the complex, longitudinal EMA data.

For the most recent study, they took all of those factors and developed a new machine learning technique, “Multi-Modal Multi-Task Learning” (M3TL), to effectively combine the different types of data and predict multiple recovery outcomes.

In this approach, the AI learns to weigh the relatedness among the outcomes while capturing their differences from the multimodal data, Lu adds.

This method takes shared information on interrelated tasks of predicting different outcomes and then leverages the shared information to help the model understand how to make an accurate prediction, according to Xu.

It all comes together in the final package, producing a predicted change in each patient’s post-operative pain interference and physical function scores.
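The authors' implementation isn't reproduced here, but the multi-task pattern the description implies – a shared representation feeding a separate head per recovery outcome – can be sketched roughly as follows (the input features and layer sizes are invented for illustration):

```python
import torch
import torch.nn as nn

class MultiTaskRecoveryModel(nn.Module):
    """Toy multimodal multi-task sketch: shared encoder, one head per outcome."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        # n_features would concatenate wearable activity summaries, EMA survey
        # responses and clinical variables (illustrative, not the study's inputs).
        self.shared = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.pain_interference_head = nn.Linear(hidden, 1)
        self.physical_function_head = nn.Linear(hidden, 1)

    def forward(self, x):
        z = self.shared(x)
        return self.pain_interference_head(z), self.physical_function_head(z)

# Training would sum the per-task losses, so the shared layers learn what the
# outcomes have in common while the separate heads capture their differences:
#   pain_pred, func_pred = model(features)
#   loss = mse(pain_pred, pain_target) + mse(func_pred, func_target)
```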

Greenberg says the study is ongoing as they continue to fine tune their models so they can take these more detailed assessments, predict outcomes and, most notably, “understand what types of factors can potentially be modified to improve longer term outcomes.”

Source: Washington University in St. Louis

Earn CPD Points with EthiQal’s Webinar on Record Keeping

On Wednesday 5 June at 18:00, EthiQal cordially invites you to attend their ethics webinar, “Documenting care: Effective record-keeping and requests for records”.

Hosted by Dr Hlombe Makuluma, Medicolegal Advisor at EthiQal, this webinar will be co-presented by two admitted attorneys, Mashooma Parker and Jessica Viljoen, who are both legal advisors within the claims team at EthiQal. The 90-minute session will cover compliance for record-keeping requirements as well as dealing with requests for patient records from patients and third parties.

Participants will gain valuable insights to help them keep records ethically and handle requests for records correctly, fostering responsible and compliant practice.

Mashooma Parker is a skilled Legal Advisor within the Claims & Legal team at EthiQal, specialising in medical malpractice. With a strong background in the legal field and a passion for assisting healthcare practitioners, Mashooma brings a wealth of expertise to navigating the complexities that arise with patients and third parties. Presenting the first topic, she will cover the requirements for healthcare practitioners to ensure quality record-keeping in compliance with Booklet 9 of the HPCSA’s Ethical Guidelines.

Jessica Viljoen is an admitted attorney and legal advisor specialising in professional indemnity insurance for healthcare practitioners and medical malpractice law. With her extensive experience within the medico-legal space, including her years of litigation experience, Jessica leverages her industry knowledge to provide legal advice and assistance to medical practitioners of all specialties throughout South Africa. She will present the second part of the talk, which will deal with patient and third-party requests for patient records and how to ensure compliance with the Promotion of Access to Information Act 2 of 2000.

The speakers will offer some useful tips from a medico-legal risk management perspective for health practitioners to be cognisant of, and will work through some practical examples to illustrate the importance of the topic.

At least one hour’s attendance on the Zoom Platform is required to earn CPD points, and for those unable to watch it live, a recording will be made available.

Click here to register now

Best Practice in POPIA Compliance in TeleHealth

By Wayne Janneker, Executive for Mining Industrial and Health Management at BCX

In the intricate field of healthcare, where privacy and the security of patients’ data are of utmost importance, the Protection of Personal Information Act (POPIA) emerges as cornerstone legislation. Specifically crafted to safeguard individual privacy, POPIA carries profound implications for the healthcare sector, particularly in the protection of a patient’s medical data.

POPIA establishes a framework for healthcare professionals, mandating that they make reasonable efforts to inform patients before obtaining personal information from alternative sources. The Act places significant emphasis on the secure and private management of patients’ medical records, instilling a sense of responsibility within the healthcare community.

Section 26 of the Act unequivocally prohibits the processing of personal health information, yet Section 32(1) introduces a caveat. This section extends exemptions to medical professionals and healthcare institutions, but only under the condition that such information is essential for providing proper treatment and care pathways. It’s a delicate balance, ensuring the patient’s well-being while respecting the boundaries of privacy.

A breach of POPIA transpires when personal information is acquired without explicit consent, accessed unlawfully, or when healthcare professionals fall short of taking reasonable steps to prevent unauthorised disclosure, potentially causing harm or distress to the patient. The consequences for non-compliance are severe, ranging from substantial monetary compensation to imprisonment.

For healthcare providers, especially those venturing into the realm of telehealth services, navigating POPIA compliance is of critical importance. Good clinical practices become the guiding principles in this journey of upholding patient confidentiality and privacy.

Let’s delve into the essentials of ensuring privacy in healthcare, where understanding the nuances of privacy laws becomes the bedrock for healthcare providers. It’s not merely about keeping up with regulations; it’s about aligning practices with the legal landscape, creating a solid foundation for what follows.

When we shift the focus to telehealth, selecting platforms tailored to meet POPIA requirements becomes imperative. Envision these platforms as protectors of patient information, featuring end-to-end encryption and secure data storage, creating a fortress around sensitive data. But we can’t merely stop there; we need to be proactive. Regular risk assessments become the secret weapon, requiring healthcare providers to stay ahead of the game, constantly evolving and nipping potential security threats in the bud.

Managing the human element—the healthcare team—becomes significant. Educating them about compliance, data security, and the significance of patient confidentiality adds another layer of protection. When everyone comprehends their role in maintaining compliance, it’s akin to having a team of protectors ensuring the safety of patient data.

Establishing clear policies and procedures around telehealth use, patient consent, and the secure handling of patient data is our compass for ethical and legal navigation. It’s not just about ticking boxes; it’s about creating a roadmap that ensures we’re on the right path.

Informed consent is the cornerstone of this journey. It’s about building trust with patients by communicating transparently through secure channels. Encryption of patient data, stringent access controls, regular internal audits and airtight data breach response plans all form part of the strategy, ensuring a state of readiness to tackle any challenges that come our way.

In this dynamic landscape, technology can’t be static. Regular updates to telehealth technology, software, and security measures are our way of staying in sync with evolving threats and regulations.

Healthcare providers aren’t necessarily experts on the Act or technology, which is why consulting with legal experts specialising in healthcare can provide accurate information on which to base decisions. It ensures that practices aren’t just compliant but resilient against any legal scrutiny that may come their way.

The final and most crucial element is the patient. Their feedback is like a map, guiding healthcare providers to areas of improvement. By monitoring and seeking insights from patients regarding their telehealth experiences, providers uncover ways to enhance their compliance measures.

In embracing these best practices and remaining vigilant to changes, healthcare practitioners and providers can navigate POPIA compliance successfully and deliver high-quality health and telehealth services. It’s a commitment to patient privacy, data security, and the evolving landscape of healthcare regulations that will propel the industry forward.

When it Comes to Healthcare, AI Still Needs Human Supervision

Photo by Tara Winstead on Pexels

State-of-the-art artificial intelligence systems known as large language models (LLMs) are poor medical coders, according to researchers at the Icahn School of Medicine at Mount Sinai. Their study, published in NEJM AI, emphasises the necessity for refinement and validation of these technologies before considering clinical implementation.

The study extracted a list of more than 27 000 unique diagnosis and procedure codes from 12 months of routine care in the Mount Sinai Health System, while excluding identifiable patient data. Using the description for each code, the researchers prompted models from OpenAI, Google, and Meta to output the most accurate medical codes. The generated codes were compared with the original codes and errors were analysed for any patterns.
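Conceptually, the evaluation reduces to a simple loop: for each code's official description, ask the model for the code and check for an exact match. Below is a minimal sketch of that comparison, where `query_model` is a hypothetical placeholder for whichever LLM is being tested; the study's actual prompts and API calls are not shown.

```python
def exact_match_rate(code_descriptions, query_model):
    """code_descriptions: dict mapping a true code to its official description.
    query_model: placeholder callable returning the model's predicted code for
    a description (the real prompting/API plumbing is out of scope here)."""
    hits = 0
    for true_code, description in code_descriptions.items():
        predicted = query_model(
            f"Give only the medical billing code for: {description}"
        ).strip().upper()
        if predicted == true_code.upper():
            hits += 1
    return hits / len(code_descriptions)

# Hypothetical usage with a single ICD-10-CM entry:
# rate = exact_match_rate({"I10": "Essential (primary) hypertension"}, my_llm)
```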

The investigators reported that all of the studied large language models, including GPT-4, GPT-3.5, Gemini-pro, and Llama-2-70b, showed limited accuracy (below 50%) in reproducing the original medical codes, highlighting a significant gap in their usefulness for medical coding. GPT-4 demonstrated the best performance, with the highest exact match rates for ICD-9-CM (45.9%), ICD-10-CM (33.9%), and CPT codes (49.8%).

GPT-4 also produced the highest proportion of incorrectly generated codes that still conveyed the correct meaning. For example, when given the ICD-9-CM description “nodular prostate without urinary obstruction,” GPT-4 generated a code for “nodular prostate,” showcasing its comparatively nuanced understanding of medical terminology. However, even considering these technically correct codes, an unacceptably large number of errors remained.

The next best-performing model, GPT-3.5, had the greatest tendency toward being vague. It had the highest proportion of incorrectly generated codes that were accurate but more general in nature compared to the precise codes. For example, when provided with the ICD-9-CM description “unspecified adverse effect of anesthesia,” GPT-3.5 generated a code for “other specified adverse effects, not elsewhere classified.”

“Our findings underscore the critical need for rigorous evaluation and refinement before deploying AI technologies in sensitive operational areas like medical coding,” says study corresponding author Ali Soroush, MD, MS, Assistant Professor of Data-Driven and Digital Medicine (D3M), and Medicine (Gastroenterology), at Icahn Mount Sinai. “While AI holds great potential, it must be approached with caution and ongoing development to ensure its reliability and efficacy in health care.”

Source: The Mount Sinai Hospital / Mount Sinai School of Medicine

AI Helps Clinicians to Assess and Treat Leg Fractures

Photo by Tima Miroshnichenko on Pexels

By using artificial intelligence (AI) techniques to process gait analyses and medical records data of patients with leg fractures, researchers have uncovered insights on patients and aspects of their recovery.

The study, which is published in the Journal of Orthopaedic Research, uncovered a significant association between the rates of hospital readmission after fracture surgery and the presence of underlying medical conditions. Correlations were also found between underlying medical conditions and orthopaedic complications, although these links were not significant.

It was also apparent that gait analyses in the early postinjury phase offer valuable insights into the injury’s impact on locomotion and recovery. For clinical professionals, these patterns were key to optimising rehabilitation strategies.

“Our findings demonstrate the profound impact that integrating machine learning and gait analysis into orthopaedic practice can have, not only in improving the accuracy of post-injury complication predictions but also in tailoring rehabilitation strategies to individual patient needs,” said corresponding author Mostafa Rezapour, PhD, of Wake Forest University School of Medicine. “This approach represents a pivotal shift towards more personalised, predictive, and ultimately more effective orthopaedic care.”

Dr. Rezapour added that the study underscores the critical importance of adopting a holistic view that encompasses not just the mechanical aspects of injury recovery but also the broader spectrum of patient health. “This is a step forward in our quest to optimize rehabilitation strategies, reduce recovery times, and improve overall quality of life for patients with lower extremity fractures,” he said.

Source: Wiley

Admin and Ethics should be the Basis of Your Healthcare AI Strategy

Technology continues to play a strong role in shaping healthcare. In 2023, the focus was on how Artificial Intelligence (AI) became significantly entrenched in patient records, diagnosis and care. Now, in 2024, the focus is on the ethical aspects of AI. Many organisations, including practitioner groups, hospitals and medical associations, are putting together AI Codes of Conduct, and new legislation is planned in countries such as the USA.

The entire patient journey has benefited from the use of AI in tangible ways that we can understand. Online bookings, the sharing of information with electronic health records, keyword diagnosis, the sharing of visual scans, e-scripts, easy claims, SMSes and billing are all examples of how software systems are incorporated into practices to facilitate a streamlined experience for both the patient and the doctor. But although 75% of medical professionals agree on the transformative abilities of AI, only 6% have implemented an AI strategy.

Strategies need to include ethical considerations

CompuGroup Medical South Africa (CGM SA), a leading international MedTech company that has spent over 20 years designing software solutions for the healthcare industry, has identified one area that consistently comes up for ethical consideration.

This is the sharing of patient electronic health records, or EHRs. On the one hand, the wealth of information provided in each EHR – from a patient’s medical history, demographics, laboratory test results over time, prescribed medicines and history of medical procedures to X-rays and medical allergies – offers endless opportunities for real-time patient care. On the other hand, there seems to be a basic mistrust of how these records will be shared and stored; no one wants their personal medical information to end up on the internet.

But there’s also the philosophical view that although you might not want your information to be public record, it still has the ability to benefit the care of thousands of people. If we want a learning AI system that adapts as we do, and a decision-support system that is informed by past experiences, then the sharing of data should be viewed as a tool rather than a privacy barrier.

Admin can cause burnout

Based on their interactions with professionals, CGM has informally noted that healthcare practices spend 73% of their time dealing with administrative tasks. This can be broken down into 38% focusing on EHR documentation and review, 19% related to insurance and billing, 11% on tests, medications and other orders and the final 6% on clinical planning and logistics.

Even during the consultation, doctors can spend up to 40% of their time taking clinical notes. Besides the extra burden that this places on healthcare practices, it also means less attention is paid to the patient, and it still leaves 1-2 hours of admin in the evenings. (Admin is the number one cause of burnout in clinicians, and too much screen time during consultations is the number one complaint from patients.)

The solution

The ability for medical practitioners to implement valuable and effective advanced software, such as Autoscriber, will assist with time saving, data quality and overall job satisfaction. Autoscriber is an AI engine designed to ease the effort of creating clinical notes by turning the consultation between patient and doctor into a structured summary that includes ICD-10 codes, the standard classification of diseases used by South African medical professionals.

It identifies clinical facts in real time, including medications and symptoms. It then orders and summarises the data in a format ready for import into the EHR, creating a more detailed and standardised report on each patient encounter, allowing for a more holistic patient outcome. In essence, with the introduction of Autoscriber into the South African market, CGM seeks to aid practitioners in swiftly creating precise and efficient clinical records, saving them from extensive after-hours commitments.
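The article does not describe Autoscriber's actual output schema, but the kind of structured, EHR-ready summary it refers to can be pictured roughly like this (a purely hypothetical illustration):

```python
from dataclasses import dataclass, field

@dataclass
class StructuredConsultNote:
    """Hypothetical shape of an AI-generated consultation summary
    (illustrative only; not Autoscriber's actual schema)."""
    patient_id: str
    summary: str                                            # narrative summary of the consult
    symptoms: list[str] = field(default_factory=list)
    medications: list[str] = field(default_factory=list)
    icd10_codes: list[str] = field(default_factory=list)    # e.g. ["J45.901"]

note = StructuredConsultNote(
    patient_id="example-001",
    summary="Patient presents with worsening asthma symptoms over two weeks.",
    symptoms=["wheezing", "shortness of breath"],
    medications=["salbutamol inhaler"],
    icd10_codes=["J45.901"],  # unspecified asthma with (acute) exacerbation
)
```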

Dilip Naran, VP of Product Architecture at CGM SA explains: “It is clear that AI will not replace healthcare professionals, but it will augment their capabilities to provide superior patient care. Ethical considerations are important but should not override patient care or safety. The Autoscriber solution provides full control to the HCP to use, edit or discard the transcribed note ensuring that these notes are comprehensive, attributable and contemporaneous.”

Virtual Reality Sessions can Lessen Cancer Pain, Trial Shows

Photo by Bradley Hook on Pexels

Hospitalised cancer patients who engaged in a 10-minute virtual reality (VR) session experienced significantly lessened pain in a trial published in CANCER, a peer-reviewed journal of the American Cancer Society. Participants still experienced sustained benefits a day later.

Most cancer patients experience pain, and treatment usually involves medications including opioids. VR sessions that immerse the user in new environments have been shown to be a noninvasive and nonpharmacologic way to lessen pain in different patient populations, but data are lacking in individuals with cancer. To investigate, Hunter Groninger, MD, of Georgetown University School of Medicine and MedStar Health and his colleagues randomized 128 adults with cancer with moderate or severe pain to a 10-minute immersive VR intervention involving calm, pleasant environments or to a 10-minute two-dimensional guided imagery experience on an iPad tablet.

The investigators found that both interventions lessened pain, but VR sessions had a greater impact. Based on patient-reported scores from 0 to 10, patients in the guided imagery group reported an average decrease of 0.7 in pain scores, whereas those in the VR group reported an average drop of 1.4. Twenty-four hours after the assigned intervention, participants in the VR group reported sustained improvement in pain severity (1.7 points lower than baseline before the VR intervention) compared with participants in the guided imagery group (only 0.3 points lower than baseline before the active control intervention).

Participants assigned to the VR intervention also reported improvements related to pain “bothersomeness” (how much the pain bothered them, regardless of the severity of the pain) and general distress, and they expressed satisfaction with the intervention. 

“Results from this trial suggest that immersive VR may be a useful non-medication strategy to improve the cancer pain experience,” said Dr Groninger. “While this study was conducted among hospitalized patients, future studies should also evaluate VR pain therapies in outpatient settings and explore the impact of different VR content to improve different types of cancer-related pain in different patient populations. Perhaps one day, patients living with cancer pain will be prescribed a VR therapy to use at home to improve their pain experience, in addition to usual cancer pain management strategies like pain medications.”

Source: Wiley

The Digital Nurse: Redefining the Future of Healthcare in South Africa

Sandra Sampson, Director at Allmed

By Sandra Sampson, Director at Allmed

The South African healthcare landscape is undergoing a transformative shift, driven by the rapid advancement of technology. At the forefront of this change is the rise of the “digital nurse,” a testament to the increasing integration of technology into the nursing profession. This transformation is not only streamlining processes; it is addressing critical challenges like the nation’s nurse shortage while ultimately improving patient care.

Embracing convenience and accessibility

Virtual platforms have become commonplace in the nursing world, facilitating efficient and accessible professional development for nurses through online meetings, networking opportunities, and educational resources. This fosters a more connected and knowledgeable nursing community, better equipped to serve patients.

Telehealth consultations, another facet of digital nursing currently revolutionising patient care, provide convenient and accessible medical consultations from the comfort of one’s home, eliminating long wait times and unnecessary travel.

Mitigating nurse shortages and ensuring quality care

South Africa grapples with a significant nurse shortage, which places strain on the healthcare system; digital nursing offers a practical potential solution. By leveraging technology, nurses can effectively manage larger patient volumes, reducing the burden on the existing workforce and optimising resource allocation. Remote monitoring systems and AI-powered tools further empower nurses by providing real-time patient data and facilitating early intervention, ultimately improving the quality of care delivered.

Additionally, embracing technology ensures that patients, even in underserved areas, receive quality care. The efficiency gained through virtual platforms allows nurses to allocate their time effectively, addressing minor health concerns remotely and reducing the strain on healthcare facilities for non-emergency cases.

However, it must be pointed out that although leveraging technology allows nurses to effectively manage larger patient volumes, which can alleviate the strain on the current system, this doesn’t necessarily mean fewer nurses are needed, but rather that technology empowers existing numbers to reach a wider patient base to deliver more efficient, personalised care.

Evolving alongside technology: the digital nurse of tomorrow

As the healthcare industry embraces digital technologies, the role of the nurse will continue to expand. While traditional nursing skills will remain essential, the “digital nurse” of the future must possess additional competencies.  Acquiring proficiency in digital tools and equipment, along with the capability to interpret and analyse digital data, will be crucial for delivering effective patient care. However, the most critical attribute for the digital nurse will be the willingness to adapt and embrace constant technological advancements. This will require a mindset shift that comes with acknowledging that traditional methods might not be sufficient in the face of evolving patient needs.

The challenges and opportunities in change

While the adoption of digital nursing brings numerous benefits, challenges remain. Resistance from individuals accustomed to traditional healthcare practices is one hurdle. However, with the younger generation being more adaptable, the shift towards digital nursing is expected to gain wider acceptance as technology advances. To ensure the success of this digital-first healthcare, it will be necessary to focus our attention on upskilling, which means recognising that continuous training and development programs are vital for nurses to remain proficient in the face of change.

On the flip side, a change in perspective from nursing professionals themselves will be necessary. This means embracing a growth mindset and being open towards new technologies to adapt and thrive in the digital age. Lastly, healthcare professionals as a whole need to bear in mind that transformation is essential to meet the evolving needs of patients, which includes catering to a growing preference for digital healthcare solutions. Continuing to meet the needs of patients is the only guaranteed way for nursing professionals to ensure their relevance in the future. By embracing technology and fostering a culture of continuous learning, South Africa can empower its nurses to become the digital healthcare leaders of tomorrow.