Artificial intelligence (AI) is a vast field that is developing fast. Its possible applications can be confusing and intimidating for patients. At the same time, we face challenges and needs in healthcare where AI can provide support and offer solutions. Emily Lewis, a digital health innovator, knows both the industry’s and the patient’s perspective and will shed light on different aspects of the topic. She is confident that we can use technology to re-humanize healthcare and foster connection.

In the third part of our series about “AI in Healthcare”, Emily addresses the question of governance and ethical guidelines for the use of AI in healthcare as well as privacy and data protection.

Applying principles of governance and ethics in healthcare AI practices is crucial given the sensitive and personal nature of health data, as well as the potential for AI to significantly impact patient care and outcomes. Here is an overview of the aspects that developers of AI systems need to take into account:

Ethical Principles

Clear ethical principles must be defined as the basis for governance in order to build trust with end users. Examples of these principles include:

  • Transparency: AI systems and their decision-making processes should be explainable and understandable to patients, healthcare providers, and regulators.
  • Beneficence and Non-maleficence: AI should be used to benefit patients and avoid harm.
  • Justice and Fairness: AI systems should be designed and operated to ensure that they are fair and do not discriminate.
  • Patient Autonomy and Consent: Patients must remain in control of their own healthcare decisions, and data should only be used with informed consent.
  • Privacy and Confidentiality: Patient data should be handled with the utmost care, and privacy should be maintained rigorously.

Governance Structures

A multidisciplinary governance committee ensures that the right partners are at the table to create governance structures. The group should include medical professionals, data scientists, ethicists, patient advocates, and legal experts. Within a health system, an AI systems advisory committee might include stakeholders such as the Chief Operating Officer (COO), Ethics and Compliance, the Chief Information Security Officer (CISO), Legal, and the Chief Strategy Officer (CSO), making joint decisions with a standing feedback loop. This group establishes the infrastructure, protocols, and standards for the development, validation, and deployment of AI in healthcare settings.

Data Privacy and Security

Many of the procedures in place should revolve around data privacy and security. Companies must ensure their customer data is stored, transmitted, and managed with a focus on security. Data should be encrypted in transit and at rest, with ongoing monitoring and alerting configured. Personally Identifiable Information (PII) and Protected Health Information (PHI) should not be stored. Platforms should meet a range of stringent certification requirements.
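To make encryption at rest concrete, here is a minimal sketch using the Fernet recipe from the open-source Python cryptography library. The record fields, file name, and key handling are illustrative assumptions, not a production design:

```python
# Minimal sketch: encrypting a patient record at rest with Fernet
# (symmetric encryption from the open-source "cryptography" library).
# Field names and key handling are illustrative only; in production,
# the key would live in a dedicated key management service.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice: fetch from a KMS, never hard-code
cipher = Fernet(key)

record = {"patient_id": "hypothetical-123", "diagnosis": "..."}
token = cipher.encrypt(json.dumps(record).encode("utf-8"))

with open("record.enc", "wb") as fh:  # only ciphertext ever touches disk
    fh.write(token)

# Decryption on read:
restored = json.loads(cipher.decrypt(token).decode("utf-8"))
```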

Patients should have control over how their data is used when they opt in to using solutions and should be able to opt out at any time for any reason.
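As a simple illustration of the opt-in/opt-out principle, the hypothetical consent registry below only permits processing while an explicit opt-in is on record, and an opt-out takes effect immediately. All names are assumptions made for the sake of the example:

```python
# Minimal sketch of patient-controlled consent: data may be processed
# only while an explicit opt-in is on record, and an opt-out takes
# effect immediately. All identifiers are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ConsentRegistry:
    _opted_in: set[str] = field(default_factory=set)

    def opt_in(self, patient_id: str) -> None:
        self._opted_in.add(patient_id)

    def opt_out(self, patient_id: str) -> None:
        self._opted_in.discard(patient_id)  # no reason required

    def may_process(self, patient_id: str) -> bool:
        return patient_id in self._opted_in

registry = ConsentRegistry()
registry.opt_in("patient-42")
assert registry.may_process("patient-42")
registry.opt_out("patient-42")              # opt out at any time, for any reason
assert not registry.may_process("patient-42")
```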

Data Quality

Data quality needs to be ensured, and biases need to be managed in the datasets used for training and validation. It is important that data used to train and test AI algorithms is collected and stored securely and responsibly, in compliance with relevant regulations. The Advanced Encryption Standard (AES) should be applied to data at rest and in motion, alongside data governance, data masking, and data loss prevention (DLP). Anti-malware, intrusion detection, and firewalls also have to be employed.
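Data masking, for instance, can be as simple as replacing identifiers in free-text notes with placeholder tokens before the data ever reaches a training pipeline. The sketch below uses a few simplified regular-expression patterns; real DLP tooling covers many more identifier types:

```python
# Minimal sketch of data masking before training: common identifiers in
# free-text notes are replaced with placeholder tokens. The patterns are
# simplified examples; real DLP tooling covers far more identifier types.
import re

PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-. ]\d{3}[-. ]\d{4}\b"),
    "[SSN]":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_pii(text: str) -> str:
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

note = "Reached patient at 555-867-5309, follow-up via jane@example.com."
print(mask_pii(note))
# -> "Reached patient at [PHONE], follow-up via [EMAIL]."
```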

Infrastructure Security

Infrastructure security is also paramount. Those in charge of this aspect need to implement secure configurations, conduct periodic vulnerability assessments, and remediate findings promptly. Encryption, backups, configuration audits, and role-based identity and access management (e.g., virtual private networks (VPNs), multi-factor authentication) are important, as is a security operations center (SOC) with a 24/7 security team, monitoring services, and incident management.
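Role-based access management can be pictured as a mapping from roles to permitted actions, with every request checked against that mapping. The roles and actions in this sketch are hypothetical:

```python
# Minimal sketch of role-based access control: each role maps to an
# allowed set of actions, and every access decision is checked against
# that map. Roles and actions here are hypothetical.
ROLE_PERMISSIONS = {
    "clinician":      {"read_record", "write_note"},
    "data_scientist": {"read_deidentified"},
    "auditor":        {"read_audit_log"},
}

def is_allowed(role: str, action: str) -> bool:
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("clinician", "write_note")
assert not is_allowed("data_scientist", "read_record")  # least privilege
```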

Human-Centered Design

Partnership with those who will be using the product is paramount to make sure that the design is human-centered. It also fosters shared responsibility and excitement. This includes people outside the traditional care paradigm, such as payers and regulators.

Validation and Testing Processes

Rigorous validation and testing processes have to be implemented, ensuring that AI algorithms are safe, effective, and perform as intended. Those in charge need to monitor for biases and disparate impacts across different patient groups, and iteratively improve the algorithms based on these findings. Tools and techniques need to be developed for explaining the outputs of AI systems in terms that healthcare professionals and patients can understand. The capabilities and limitations of the AI system have to be clearly documented, and this information needs to be made accessible.
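One way to monitor for disparate impacts is to compute a performance metric per patient group and flag large gaps for review. The following sketch uses made-up labels and an illustrative tolerance threshold:

```python
# Minimal sketch of monitoring for disparate impact: model accuracy is
# computed per patient group, and a large gap between groups is flagged
# for review. Group labels and the threshold are illustrative.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    correct, total = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

scores = accuracy_by_group(
    y_true=[1, 0, 1, 1, 0, 1],
    y_pred=[1, 0, 1, 0, 0, 0],
    groups=["A", "A", "A", "B", "B", "B"],
)
gap = max(scores.values()) - min(scores.values())
if gap > 0.1:  # hypothetical tolerance; set per clinical context
    print(f"Disparate performance detected: {scores}")
```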

It goes without saying that each AI product intended for use in healthcare must be fully compliant from both a legal and a regulatory perspective.

Training, Interpretation and Ethical Considerations

Healthcare professionals need to be trained on the use of each tool, the interpretation of its outputs, and the ethical considerations surrounding its use. Engagement with the broader public, along with clear and transparent communication about how AI is being used in healthcare, is important, and concerns and misconceptions should be addressed proactively. Patients also need to be educated about how AI is being used in their care, and informed consent has to be obtained where necessary.

Continuous Monitoring and Auditing

Each tool needs continuous monitoring and auditing to understand how people are using and engaging with it. Mechanisms for collecting feedback from healthcare professionals, patients, and other stakeholders need to be in place. This feedback is used to continuously improve the AI systems and their governance structures. Ongoing monitoring of AI systems ensures they are performing as expected and helps to quickly identify any issues.
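Ongoing monitoring can be sketched as comparing a rolling window of recent performance against the validation baseline and alerting on significant degradation. The baseline, window size, and threshold below are illustrative assumptions:

```python
# Minimal sketch of ongoing performance monitoring: a rolling window of
# recent model scores is compared against the validation baseline, and a
# significant drop triggers an alert. All thresholds are illustrative.
from collections import deque

BASELINE_AUC = 0.85   # hypothetical score from initial validation
ALERT_DROP = 0.05     # tolerated degradation before alerting

recent_scores: deque[float] = deque(maxlen=30)  # e.g., one score per day

def record_score(score: float) -> None:
    recent_scores.append(score)
    rolling = sum(recent_scores) / len(recent_scores)
    if rolling < BASELINE_AUC - ALERT_DROP:
        print(f"ALERT: rolling AUC {rolling:.3f} below baseline {BASELINE_AUC}")

for daily_auc in (0.86, 0.78, 0.74, 0.72):  # simulated drift
    record_score(daily_auc)
```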

The use of AI systems needs to be regularly audited, both internally and through third-party assessments, to ensure compliance with ethical principles and relevant regulations. Plans for handling any adverse events or outcomes related to the use of AI need to be developed and maintained, including clear lines of accountability and action steps.
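Auditability also benefits from a tamper-evident trail. As a minimal sketch of one possible approach (not a method prescribed here), each entry below records who did what and when and chains a hash of the previous entry, so that later alteration becomes detectable; all field values are hypothetical:

```python
# Minimal sketch of a tamper-evident audit trail: each entry records who
# did what and when, and chains a hash of the previous entry so that
# later alteration is detectable. All field values are hypothetical.
import hashlib
import json
import time

audit_log: list[dict] = []

def log_event(actor: str, action: str, subject: str) -> None:
    prev_hash = audit_log[-1]["hash"] if audit_log else "genesis"
    entry = {"ts": time.time(), "actor": actor,
             "action": action, "subject": subject, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    audit_log.append(entry)

log_event("dr.smith", "viewed_ai_recommendation", "patient-42")
log_event("model-v3", "generated_risk_score", "patient-42")
```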

Conclusion

Ultimately, privacy and security of health data, accessibility and usability, and human touch are important considerations in the deployment of AI solutions in healthcare. By systematically addressing these areas, healthcare organizations can work towards responsible, ethical, and effective use of AI, prioritizing patient well-being and societal values throughout the AI lifecycle.
