Ethical Challenges and Governance in the Integration of Digital Health Technologies and AI
The rise of digital health technologies has led to the generation of both qualitative and quantitative “big data,” such as digital traces of our everyday health behaviors. This data holds valuable information that could potentially benefit patients and caregivers. However, the rapid integration of digital health technologies and artificial intelligence (AI) into research and clinical practice raises significant ethical concerns that need to be addressed. The literature on the ethics of AI in health suggests that the governance of digital health research is currently a “wild west,” with various stakeholders involved, including technology developers, funders, researchers, participants, and journal editors. Regulatory controls and standards have not kept pace with this convergence in the health ecosystem. Several gaps and opportunities exist in the digital health research landscape. Firstly, there are disciplinary and sector challenges: technology makers may not understand patients’ needs, computational scientists may train AI on unrepresentative datasets, and clinicians may struggle to make sense of AI-driven decisions. Secondly, there is a need to improve digital and technology literacy among various stakeholders, including policymakers, educators, regulators, clinicians, patients, and caregivers. Thirdly, there is a lack of consistent ethical and regulatory standards for evaluating and testing new digital health technologies and AI systems. Regulatory bodies and technologists may lack the necessary expertise or training to conduct appropriate risk assessments and ensure ethical practices.
To address these gaps, several initiatives have emerged to examine the ethical, legal, and social implications (ELSI) of digital health and AI technologies. Groups like the Connected and Open Research Ethics (CORE) initiative, Pervasive Data Ethics for Computational Research (PERVADE), and Mobile Health ELSI (mHealthELSI) project aim to inform policy and governance in this largely unregulated space. However, many of these efforts are broad in scope and may lack specificity for digital health applications. While regulatory frameworks are slowly emerging, stakeholders must recognize their ethical obligation to uphold the integrity of digital health research in this unregulated space. To address these challenges, the literature points towards a digital health decision-making framework with five main domains: Participant Privacy, Risks and Benefits, Access and Usability, Data Management, and Ethical Principles. Along these lines, implementing digital health tools requires careful consideration of ethical principles, risk/benefit assessments, privacy protection, equitable access and usability, and sound data management practices. Collective action is needed to establish ethical standards and accountability measures to ensure the responsible development and use of digital health technologies. This document is a first effort to provide an overview of ethical considerations in digital health and precision medicine research projects. Research Ethics Committees (RECs) emphasize the importance of inclusivity and equity in digital health studies; below we list some of the points that could be raised by RECs.
Considerations
Equity
- Equity is defined by equal access to digital technology and healthcare, equity in treatment, non-discrimination, data ownership, and empowerment. Here the aim is to ensure that precision medicine and digital health solutions are accessible to all populations, including underprivileged and minority groups.
Participant Safety and Support
- A significant portion of discussions revolves around Public and Patient Involvement (PPI) in study design and the development of comprehensible materials. This involvement helps address concerns about participant burden and frustration with the technology. Amendments to recruitment protocols are often requested to reduce pressure on participants and ensure voluntary participation. Well-considered safety measures for control groups and sensitive populations are required. Clear and user-friendly instructions for technology usage are essential, and studies involving high-risk devices must undergo safety checks before approval.
Conflict of Interest
- Applicants must disclose all affiliations with sponsors and ensure that financially affiliated researchers do not consent patients. RECs place emphasis on clarifying relationships between researchers, organizations, and study locations, especially when submitted to Higher Education Institution RECs.
Cost-benefit
- The cost-benefit ratio of digital health solutions is another important issue, as some professionals argue that other health interventions might offer greater benefits and deserve better funding. Reallocating resources towards improving living conditions and health-related behaviors could potentially yield more significant health benefits. Despite these concerns, some studies indicate that digital health solutions are at least as cost-effective as conventional care, warranting further investigation into their economic viability.
Harm Prevention
- Harm prevention is critical, with most applications encountering barriers related to unforeseen events such as participant non-adherence or emergencies. Mechanisms must be in place to detect and address these issues with a consideration for high-risk patients. Clear reaction protocols for unexpected events, such as device failures, are essential. Proper training for staff and clinicians, sterilization of materials, and listing device contraindications are emphasized to avert potential risks.
Data Governance
- This aspect covers data management, including storage, access, and transfer through pipelines in which developers and data managers de-identify data, clarify device ownership, and justify their storage and compute methods. Anonymization and confidentiality strategies must be detailed, distinguishing between pseudonymous and anonymous data. Justifications for video and audio recording are necessary, particularly regarding the duration of storage and sharing with sponsors. Clear procedures for handling data upon a participant’s request for data withdrawal are required. Finally, secure data transfer protocols, especially in international collaborations, are necessary.
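The distinction between pseudonymous and anonymous data can be made concrete with a minimal sketch. The snippet below illustrates keyed pseudonymization of participant identifiers before data leave a study site; the identifier format, field names, and key handling are hypothetical, and the point is that a keyed pseudonym remains re-linkable by whoever holds the key (which matters for data-withdrawal requests), so it is pseudonymization, not anonymization.

```python
import hashlib
import hmac

# Hypothetical: the key is held only by the study's data manager,
# outside the shared dataset, so pseudonyms can be re-linked internally
# but the exported records contain no direct identifiers.
SECRET_KEY = b"study-specific-secret"  # illustrative value only

def pseudonymize(participant_id: str, key: bytes = SECRET_KEY) -> str:
    """Return a stable pseudonym for an identifier using HMAC-SHA256."""
    digest = hmac.new(key, participant_id.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

record = {"participant_id": "ID-1234567", "heart_rate": 72}
# Replace the direct identifier before sharing the record.
shared = {**record, "participant_id": pseudonymize(record["participant_id"])}
```

Because the mapping is deterministic, the same participant always receives the same pseudonym across data batches; destroying the key is one route towards anonymization, but REC guidance on residual re-identification risk should still apply.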
Fairness of AI and algorithms
- Concerns about the fairness of AI and algorithms in digital health primarily revolve around bias, uncertainty, and transparency. AI systems can inherit biases from the data they are trained on, potentially leading to unequal treatment of different patient groups. This bias can exacerbate existing health disparities, particularly affecting marginalized communities. Along the same lines, transparency in how algorithms make decisions is crucial, yet many AI models are “black boxes,” making it difficult for healthcare providers and patients to understand and trust their outputs.
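One way to make the bias concern operational is to audit a model's decisions across patient groups. The sketch below computes a simple demographic parity difference (the gap in positive-prediction rates between groups); the group labels and predictions are illustrative, not drawn from any real system, and this is only one of several fairness metrics a REC might ask about.

```python
def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-prediction rate between any two groups.

    predictions: iterable of 0/1 model decisions.
    groups: iterable of group labels, aligned with predictions.
    """
    counts = {}  # group -> (positives, total)
    for pred, grp in zip(predictions, groups):
        pos, tot = counts.get(grp, (0, 0))
        counts[grp] = (pos + pred, tot + 1)
    rates = {g: pos / tot for g, (pos, tot) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Illustrative audit: group A receives positive decisions far more often.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(preds, groups)  # 0.75 - 0.25 = 0.5
```

A gap near zero does not prove a system is fair (base rates may legitimately differ between groups), but a large unexplained gap is exactly the kind of signal that warrants the transparency the paragraph above calls for.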
Conclusion
The integration of digital health technologies into clinical and healthcare management practice presents both opportunities and challenges. These technologies can transform healthcare by providing valuable insights and improving patient outcomes through data-driven decision-making. However, the ethical concerns surrounding privacy, equity, fairness, and safety remain to be addressed, and the governance of digital health remains underdeveloped, leading to inconsistent standards across stakeholders such as researchers, clinicians, and technologists. Current efforts, such as frameworks focusing on participant privacy, access, data management, and ethical principles, are essential first steps but need refinement to ensure the responsible use of digital health technologies. Addressing these challenges is complex. A major obstacle is the lack of standardization in regulatory and ethical frameworks, which requires bridging the knowledge and framework gaps among technology developers, healthcare providers, and regulatory bodies to ensure that AI models are trained on representative datasets and that their outputs are interpretable. Another challenge lies in enhancing digital literacy across all stakeholders, including patients, clinicians, and policymakers, to ensure informed participation and decision-making in this rapidly evolving field. Moreover, balancing cost-benefit considerations with equitable access to these technologies remains a pressing issue, especially for underserved populations.
Future research should focus on developing robust frameworks for ethical AI deployment in healthcare, ensuring fairness in algorithmic design, and addressing the biases that exist within AI systems. Investigating the long-term societal impacts of digital health technologies on health equity, as well as improving transparency in AI-driven decision-making, are also important areas for future study. Furthermore, exploring international collaborations and their implications for data governance could provide valuable insights into creating more inclusive and secure digital health ecosystems. Establishing clearer ethical and regulatory guidelines will be vital to harnessing the full potential of digital health while safeguarding patient rights and well-being.
References:
- Patients’ perspectives related to ethical issues and risks in precision medicine: a systematic review
- Patients’ and professionals’ views related to ethical issues in precision medicine: a mixed research synthesis
- Practical Challenges for Commercial Enterprises in the Ethics Review Process for Digital Health Research: Document Analysis and Interview Study
- The ethical challenges facing the widespread adoption of digital healthcare technology
- Building the case for actionable ethics in digital health research supported by artificial intelligence
- Ethical Issues in Precision Medicine
- Ethical issues in biomedical research using electronic health records: a systematic review
- Using digital health to enable ethical health research in conflict and other humanitarian settings
- Emerging ethical issues regarding digital health data. On the World Medical Association Draft Declaration on Ethical Considerations Regarding Health Databases and Biobanks
- Centers for Disease Control and Prevention. (2024). Equity in digital health: Addressing disparities through technology
- Digital Health Australia. (2024). Ethical frameworks reduce risk and improve safety when implementing AI in healthcare
- HITRUST Alliance. (2024). The ethics of AI in healthcare
- National Center for Biotechnology Information. (2023). Addressing safety and usability in digital health interventions
- National Center for Biotechnology Information. (2022). Conflict of interest in digital health research: Ethical considerations
- Nature Communications. (2024). Algorithmic fairness in digital health: Challenges and solutions
- World Health Organization. (2023). Ethics and governance of artificial intelligence for health