
Viewpoints Blog


Use of artificial intelligence in healthcare brings risks as well as rewards (Part 2 of 2)

Written by Bill Ahrens, Mazars

In Part 1 of the series, we addressed the various benefits of Artificial Intelligence (AI) in healthcare. Although AI holds the promise of revolutionizing healthcare, its use also entails many significant risks and challenges.


Risks of AI Implementations

While the current uses of AI in healthcare hold immense promise, several challenges must be addressed so that AI can be deployed and managed in a safe and responsible manner. Among the issues to be addressed are:

  • Inadequate testing or insufficiently prepared training data can cause AI’s underlying algorithms to produce erroneous and misleading results, impacting patient safety.

  • AI algorithms are susceptible to inheriting biases present in historical data, resulting in discriminatory outcomes and unequal treatment across different patient groups (one simple way to surface such disparities is sketched just after this list).

  • AI’s rapid advancement has far outpaced the development of guidelines and regulations governing its use. Evolving laws and ethical standards have not yet adequately addressed such issues as liability for errors made by AI systems, accountability for algorithmic decisions, ownership of training data, and transparency in AI-driven processes.

  • Integrating AI can disrupt clinical workflows; processes must be redesigned so that they fully leverage the respective strengths of clinical staff and AI technologies.

  • Patient privacy and data security concerns arise from AI’s reliance on vast amounts of data to make predictions and recommendations. This reliance creates significant risk of data breaches, unauthorized access, disclosure, and misuse of patient data. Additionally, AI inference may reveal sensitive information that was never explicitly disclosed.

  • The AI system itself can become a target of attack if malicious actors exploit vulnerabilities or weaknesses in the algorithm, model, or implementation. For example, an attacker could inject poisoned data into training sets, resulting in incorrect predictions or unexpected behavior. In addition, alteration of patient records could lead to erroneous clinical decision-making.

  • Insiders pose a significant cybersecurity risk since they are authorized users with access to sensitive data and could misuse their privileges.

  • Malicious actors can employ AI technologies to evade traditional anti-malware systems or generate sophisticated phishing emails and social engineering attacks.

  • Healthcare organizations outsource numerous processes to third parties whose systems or personnel could unwittingly introduce vulnerabilities or security weaknesses into AI solutions, posing risks to patient data security and system integrity.
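
To make the bias risk above concrete, the sketch below shows one simple way such disparities can surface during model evaluation: comparing a model’s sensitivity (true-positive rate) across patient groups. It is a minimal illustration in Python using hypothetical group labels and made-up evaluation data, not a complete fairness audit.

```python
from collections import defaultdict

def sensitivity_by_group(results):
    """Compute per-group sensitivity (true-positive rate) from
    (group, actual_label, predicted_label) triples."""
    positives = defaultdict(int)  # actual positive cases per group
    caught = defaultdict(int)     # positives the model correctly flagged
    for group, actual, predicted in results:
        if actual == 1:
            positives[group] += 1
            if predicted == 1:
                caught[group] += 1
    return {g: caught[g] / positives[g] for g in positives}

# Hypothetical evaluation data: (patient group, actual diagnosis, model prediction)
results = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 1, 1), ("group_b", 0, 0),
]

for group, tpr in sorted(sensitivity_by_group(results).items()):
    print(f"{group}: sensitivity = {tpr:.2f}")
# group_a: sensitivity = 0.67
# group_b: sensitivity = 0.33
# A persistent gap like this warrants scrutiny of the training data
# and the model before clinical deployment.
```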


Mitigation of AI Risks

As healthcare organizations increasingly employ AI to enhance patient care and gain operational efficiencies, they must also address the associated risks and challenges. Following are some strategies to safeguard protected health information (PHI) and mitigate the risks of AI:


  • Governance. Establishing a solid governance framework is foundational to mitigating AI risks. Healthcare organizations need to develop appropriate policies and procedures, define clear roles and responsibilities, set guidelines for the ethical use of AI, and monitor compliance with evolving regulations and standards.

  • Algorithm testing and validation. To ensure patient safety, AI algorithms and systems must be thoroughly tested and validated for accuracy, reliability, and performance as well as monitored for potential errors or biases.

  • Bias mitigation. To reduce the risk of bias, training data must be thoroughly analyzed to identify and mitigate biases inherent in the data. Other strategies to reduce bias include algorithmic transparency and diverse representation in dataset curation.

  • Patient data privacy and security. Transparency and informed consent in data collection and processing must be established, and adequate cyber controls implemented, such as strong encryption, multi-factor authentication, access controls based on least privilege and need-to-know, data anonymization, log monitoring, and privilege management.

  • Third-party risk management. Robust third-party risk management (TPRM) programs need to be developed, implemented, and continuously monitored. Recent industry data indicate that third parties are responsible for more than 50% of breaches, for which the organization is ultimately accountable.

  • Risk assessment. Risk assessments must be regularly performed to identify and prioritize potential risks, threats, and vulnerabilities associated with AI systems. Vulnerability testing should be routinely conducted to identify and remediate potential weaknesses before they can be exploited, and penetration testing performed at least annually to simulate real-world attacks. Defensive mechanisms include anomaly detection systems and integrity checks to detect and mitigate malicious activities (a minimal integrity-check sketch follows this list).

  • Security audits. Regular audits are essential for assessing compliance with security policies and standards as well as identifying and addressing vulnerabilities. By taking proactive steps, healthcare organizations can strengthen their defenses and mitigate potential damages.

  • Training and awareness. Human error is the leading cause of cyber incidents. Healthcare organizations must provide ongoing cybersecurity training and awareness programs to all workforce members to limit potential threats such as phishing and other social engineering attacks.

  • Continuous monitoring. Cyber threats are constantly evolving, requiring organizations to actively monitor threat intelligence sources, security advisories, and industry reports to stay abreast of the latest developments. Using a cyber threat-adaptive framework that is updated regularly to stay ahead of emerging threats is another sound strategy to reduce vulnerabilities.
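
One of the integrity checks mentioned in the risk-assessment item above can be sketched very simply: fingerprint each record when it is ingested, then flag any record whose contents later change. The Python snippet below is a minimal illustration with hypothetical record IDs and fields; a production system would also need to protect the baseline fingerprints themselves (e.g., with signed or append-only storage).

```python
import hashlib
import json

def fingerprint(record: dict) -> str:
    """Return a SHA-256 fingerprint of a record's canonical JSON form."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def detect_tampering(records: dict, baseline: dict) -> list:
    """Return IDs of records whose current fingerprint no longer matches
    the baseline captured at ingestion time."""
    return [rid for rid, rec in records.items()
            if fingerprint(rec) != baseline.get(rid)]

# Hypothetical patient records, fingerprinted at ingestion.
records = {
    "p-001": {"age": 62, "dx": "I10"},
    "p-002": {"age": 47, "dx": "E11.9"},
}
baseline = {rid: fingerprint(rec) for rid, rec in records.items()}

records["p-002"]["dx"] = "Z00.00"  # simulated unauthorized alteration
print(detect_tampering(records, baseline))  # -> ['p-002']
```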


Evolving Standards and Guidelines

A number of standards and guidelines have been developed to further enhance AI security, including:

  • ISO/IEC 27090, which addresses security requirements for AI systems and provides guidelines for securing AI components and data.

  • ENISA Framework for AI Cybersecurity Practices, which outlines best practices for securing AI systems, including risk management, data protection, and transparency.

  • ISO/IEC TR 27563, which provides guidance on AI cybersecurity, risk assessment, threat modeling, and secure development practices.

  • MITRE Adversarial Threat Landscape for AI Systems (ATLAS), which contains adversary tactics and techniques based on real-world attack observations.

  • NIST AI Risk Management Framework (AI RMF), which provides guidelines to manage risks associated with AI.


Final Thoughts

As AI technologies continue to evolve, they hold the potential to drive other, as yet unimagined, innovations. However, it is essential for organizations to recognize and address AI’s associated risks and challenges. Healthcare organizations must approach the integration of AI thoughtfully, ethically, and responsibly to maximize its benefits while minimizing potential risks and ensuring patient safety, privacy, and ethical integrity.


  • Cybersecurity risks must be carefully managed to protect patient data, maintain trust, and ensure the integrity of healthcare systems.

  • Robust governance frameworks must be established to prioritize patient safety, address bias and discrimination, safeguard data privacy, intelligently integrate AI into workflows, and engage with patients and communities.

  • Using a risk-based approach and an industry-standard risk and compliance framework with integrated AI governance standards and frequent updates, such as the HITRUST CSF, healthcare organizations can gain assurance that they have appropriately adopted and implemented controls to effectively mitigate the cybersecurity risks associated with the use of AI.

  • As AI is increasingly integrated into healthcare systems and workflows, organizations must be vigilant in identifying and addressing the threats and vulnerabilities associated with its implementation.

  • Effective cyber controls include robust authentication and access controls, encrypting sensitive data, conducting regular security audits and penetration testing, instilling a culture of security awareness, and staying abreast of emerging threats.


Considered by many to be the gold-standard risk and compliance framework, due in part to its prescriptive controls, robust QC processes, and ability to incorporate an extensive list of authoritative sources, including those related to AI, the HITRUST CSF enables clients to meet regulatory compliance requirements and obtain assurance of their cyber risk level.


The HITRUST CSF now allows subscribers to add “Artificial Intelligence Risk Management” requirements to their assessments to gain an understanding of any gaps in, or risks to, their AI implementations. These AI-based requirements, culled from several industry-leading standards and guidelines, assist organizations in appropriately configuring their AI system implementations and associated defenses. Finally, HITRUST CSF requirements are updated at least annually to provide a threat-adaptive certification, ensuring they address the most pressing active cyber threats.


 
