Digital Health Ethics
Digital health tools have the potential to improve healthcare delivery and outcomes, but they can also cause harm. Therefore, it is our responsibility to ensure that digital health solutions are ethical and do not cause harm.
From mobile medical apps and software that support the clinical decisions doctors make every day to artificial intelligence and machine learning, digital technology has been driving a revolution in health care. Digital health tools have the vast potential to improve our ability to accurately diagnose and treat disease and to enhance the delivery of health care for the individual.
U.S. Food and Drug Administration
Stakeholders Responsible for the Future of Digital Health
Ethical Principles
The ethical principles of autonomy, justice, beneficence, and non-maleficence guide decision-making in healthcare and research.
- Autonomy means that individuals have the right to make informed decisions for themselves.
- Justice means that the burdens and benefits of healthcare should be distributed equally among all groups in society.
- Beneficence means that we must act in the best interests of others, ensuring that the expected good outweighs the bad.
- Non-maleficence means that we must not cause harm to individuals or society.
Ethics by Design
An ethics by design approach helps us develop digital health solutions that are ethical and effective. It involves identifying and addressing ethical concerns during the design, development, and deployment of digital health solutions.
To implement an ethics by design approach, we need to create a culture of ethics that fosters intentional decision-making. This can be achieved by keeping ethical principles in mind, asking questions at each stage of development, and creating an environment that is conducive to ethical decision-making. By doing so, we can ensure that the benefits of digital health solutions outweigh the risk of harm to all people.
Key Takeaways of Digital Health Ethics
- Digital health solutions carry real risks, but when grounded in an ethical approach they can substantially improve healthcare.
- The ethical principles of autonomy, justice, beneficence, and non-maleficence can serve as key pillars in decision-making for digital healthcare and research.
- An ethics by design approach helps us develop digital health solutions that are both ethical and effective.
- To implement an ethics by design approach, we need to create a culture of ethics fostering intentional decision-making.
- With these ethical principles and responsibilities in place, Next99 is committed to creating digital health products that address health and health-related issues promptly while causing no harm.
Ethical Implications of Digital Health
Now that we have discussed the importance of ethics by design and the need for equitable digital health solutions, let’s explore some of the specific ethical implications of digital health.
Some potential ethical implications of digital health include:
- Privacy and security concerns – Because digital health products collect and store sensitive health information, it is essential to ensure privacy and data security. There is also a risk of data breaches and of discrimination based on health information.
- Bias and discrimination – Digital health solutions can amplify existing biases and discrimination in healthcare. This can happen when non-inclusive algorithms or designs are used to deliver digital healthcare.
- Informed consent – Digital health solutions may require informed consent from users for certain healthcare applications. It is essential to ensure that the obtained consent is meaningful, not coerced, and that the end user fully understands the implications while using the digital health tool/solution.
- Patient autonomy – While digital health solutions can help patients make informed decisions, patients may also become too reliant on technology, eroding their autonomy and decision-making power.
- Responsibility and accountability – Digital health solutions involve many actors, raising questions about who bears responsibility and accountability when harmful consequences occur.
To address these ethical implications in digital health, it is essential to:
- Incorporate ethics by design principles throughout the development and deployment of digital health solutions.
- Ensure that privacy and security concerns are addressed and that data is kept secure and private.
- Promote diversity and inclusivity in the design of digital health solutions and in the data used to train algorithms.
- Obtain informed consent in a way that is meaningful, not coerced, and that the user fully understands the implications of using the digital solution.
- Empower patients to take control of their health while also ensuring that they are not overly reliant on technology.
- Ensure that responsibility and accountability are clearly defined and allocated in the use of digital health solutions.
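For teams building such solutions, meaningful consent is easier to audit when it is captured as an explicit, structured record with a defined scope and a revocation path, rather than a one-time checkbox. The sketch below is illustrative: the class, field names, and scope strings are hypothetical, not from any standard or regulation.

```python
from __future__ import annotations
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Hypothetical consent record: explicit scopes, a timestamp, and revocation
# support make consent auditable and revocable instead of implicit.
@dataclass
class ConsentRecord:
    user_id: str
    scopes: set[str]                      # e.g. {"store_vitals", "share_with_clinician"}
    granted_at: datetime
    revoked_at: Optional[datetime] = None

    def allows(self, scope: str) -> bool:
        """A use is covered only if consent was granted for that exact
        scope and has not been revoked."""
        return self.revoked_at is None and scope in self.scopes

    def revoke(self) -> None:
        self.revoked_at = datetime.now(timezone.utc)

consent = ConsentRecord("user-42", {"store_vitals"}, datetime.now(timezone.utc))
print(consent.allows("store_vitals"))           # True
print(consent.allows("share_with_advertiser"))  # False: never granted
consent.revoke()
print(consent.allows("store_vitals"))           # False: consent withdrawn
```

Keeping scopes explicit means a new use of the data (say, sharing with a research partner) requires new consent rather than silently reusing the old grant.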
Regulatory bodies play an essential role in addressing ethical implications in digital health by ensuring that digital health solutions are safe, effective, and meet ethical standards. In the United States, the Food and Drug Administration (FDA) regulates digital health solutions and has issued guidance on the development and use of these solutions.
Digital technology can either be a rising tide that raises all boats if we make it equitable… or it can be used much like it is now in most of our health systems … to segment populations to optimize the situation for some people, particularly those who are already digitally enabled.
Robert Califf, Commissioner of the FDA, Envisioning a Transformed Clinical Trials Enterprise for 2030
Key Takeaways of Ethical Implications:
- Digital health solutions can have potential ethical implications, such as privacy and security concerns, bias and discrimination, informed consent, patient autonomy, and responsibility and accountability.
- Addressing these ethical implications requires incorporating ethics by design principles throughout the development and deployment of digital health solutions.
- Regulatory bodies such as the FDA play an essential role in ensuring digital health solutions are safe, effective, and meet ethical standards.
Digital Health Inclusion
One of the major challenges in achieving digital health equity is the digital divide, which refers to the gap between those who can access, use, and afford digital health solutions and those who cannot.
Factors such as geographic location, socioeconomic status, and educational attainment impact the digital divide. Strategies for bridging this gap include ensuring accessibility, usability, and affordability of digital solutions.
- Health disparity refers to the differences in health outcomes or disease burden among different populations due to social, economic, or environmental factors.
- Healthcare disparity, on the other hand, refers to the unequal access, utilization, or affordability of healthcare services among different populations, as well as differences in health outcomes due to mistreatment or bias.
- Digital disparity refers to the unequal access, utilization, or affordability of digital technology and infrastructure, which can include barriers such as lack of internet connectivity, affordability of devices, and lack of digital literacy.
These factors can impact understanding of how to use technology and result in differences in digital health outcomes among different populations.
To truly understand the digital solutions that will work for your target population, it’s crucial to grasp their unique lived experiences. Each individual’s experience with a given health condition is multifaceted and distinct. Engaging with your target population directly can provide deep insights into the social and digital influences on their health, and help you design digital health solutions that are meaningful and impactful for them.
The digitization of healthcare is all about data. New sources of digital data are being used to fundamentally reimagine what it means to care for people, leading to better outcomes and lower costs.
New sources of digital data, including internet searches, wearable devices, and social media, offer tremendous power and promise to revolutionize how we approach health and healthcare. These data are gathered from almost everywhere in our daily lives, leading to healthcare advancements in research, public health, personalized medicine, and more.
Data holds significant power that shapes how individuals experience the world around them. Data breaches and misuse can result in harmful, lasting impacts on people’s lives. It is important to always consider the risks for the people behind the data points and take an ethical approach to digital health.
Data security is the process of protecting personal data and systems from unauthorized access, misuse, or theft. Data privacy describes an individual’s right to govern how and under what circumstances their data is collected, stored, and used.
Privacy by design is an approach that promotes data protection through technology design, and it is essential in healthcare to embed privacy considerations in each step of technology development to support trustworthiness.
To apply ethics by design principles to data privacy and protection, it is important to take an ecosystem-level view, considering the risks of harm associated with the use of data by other individuals and organizations to whom you may grant access. It is not enough to minimize the privacy risks associated with the data you collect and the intended use of these data by your digital health solution.
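One concrete privacy-by-design technique consistent with this ecosystem view is to pseudonymize direct identifiers before data leaves the collection context, so downstream parties granted access never see raw identities. A minimal sketch using keyed hashing; the key, record fields, and function name are illustrative assumptions:

```python
import hmac
import hashlib

# Illustrative only: a real deployment would keep this key in a secrets
# manager and rotate it under a documented policy.
SECRET_KEY = b"rotate-me-regularly"

def pseudonymize(patient_id: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).
    The same patient always maps to the same token, so records stay
    linkable for research without exposing the raw identifier."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

record = {"patient_id": "MRN-0042", "heart_rate": 72}
shared = {
    "patient_token": pseudonymize(record["patient_id"]),  # identifier never leaves as-is
    "heart_rate": record["heart_rate"],
}
# Tokens are stable, so two visits by the same patient can still be joined.
print(shared["patient_token"] == pseudonymize("MRN-0042"))  # True
```

Pseudonymization reduces, but does not eliminate, re-identification risk; it is one layer alongside access controls, minimization, and the disclosure practices described above.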
Categorizing Harm in the Use of Digital Health
To better understand the implications of generating and utilizing digital data in healthcare, it’s essential to ask thoughtful questions about the potential benefits and risks for all individuals involved. One effective approach is to classify the potential harms that can emerge from the use or misuse of data.
Patient safety is the absence of preventable harm to a patient during the process of health care and the reduction of risk of unnecessary harm associated with health care to an acceptable minimum. An acceptable minimum refers to the collective notions of given current knowledge, resources available, and the context in which care was delivered weighed against the risk of non-treatment or other treatment.
World Health Organization
Key Takeaways of Minimizing Harm and Mitigating Risk in the Use of Digital Health Data:
First, it is important to recognize that data points represent real people and real consequences on their lives, and that patients must remain at the forefront of our minds as we develop new ways to utilize digital data streams. We must take an ecosystem-level view and consider the broader data ecosystem when applying ethics by design principles to data privacy and protection.
In terms of data security, it is essential to adopt fit-for-purpose security practices that safeguard individuals’ personal data from unauthorized or accidental access, processing, erasure, loss, or use, and that minimize the harm to the individual should a breach occur.
Good security practices are proactive, seeking to minimize the risk of a data breach.
Regarding data privacy, we must seek to maximize the benefits of new flows of data in the digital era of health while minimizing the risk of harm.
To do so, we must embed privacy considerations in each step of technology development, and consider what data is collected, how it is used, and how this collection and use are disclosed.
Finally, we must take an ethical approach to the generation and use of digital health data by weighing the benefits and risks of developing and deploying digital solutions, ensuring that the benefits outweigh the risks for all people, and prioritizing training and education to ensure that every member of our team understands what the risks are and how to prevent them.
The way user data has been treated has an emerging history of malfeasance.
– Andrea Coravos, Jennifer C. Goldsack, Daniel R. Karlin, Camille Nebeker, Eric Perakslis, Noah Zimmerman, and M. Kelley Erb, Digital Medicine: A Primer on Measurement
Potential and Risks of AI and ML in Digital Health
Artificial Intelligence (AI) and Machine Learning (ML) have the potential to revolutionize healthcare by improving decision-making, preventing errors such as misdiagnosis, and providing personalized treatment for patients. However, the deployment of AI in healthcare requires an ethical approach that ensures its benefits outweigh the risks.
The promise of artificial intelligence in medicine is to provide composite, panoramic views of individuals’ medical data; to improve decision making; to avoid errors such as misdiagnosis and unnecessary procedures; to help in the ordering and interpretation of appropriate tests; and to recommend treatment.
What is AI/ML in Healthcare?
AI applies advanced analysis and logic-based techniques, including ML, to interpret events, support and automate decisions, and take action in healthcare.
Why is Ethical AI in Healthcare Important?
- AI/ML can improve workflows, accelerate diagnosis, and enhance clinical decision-making, but its ethical deployment requires an “ethics-by-design” approach to ensure benefits outweigh risks.
- An effective applied ethics approach requires assessing the benefits and risks of harm in the four domains: Algorithmic Bias, Data Privacy and Security, Accountability and Responsibility, and Trust in AI.
Algorithmic Bias:
- Bias can creep into the process of creating algorithms at any stage, from study design to data collection, entry and cleaning, and model implementation and dissemination.
- An ethical approach should ensure that algorithms are developed and tested in a way that minimizes bias, and that all stakeholders are involved in developing and implementing algorithms.
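One practical way to test for algorithmic bias is to compare a model's error rates across demographic subgroups instead of reporting a single aggregate metric, which can average a disparity away. A minimal sketch with illustrative data (the groups and predictions are made up for demonstration):

```python
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, y_true, y_pred) triples.
    Returns per-group accuracy so subgroup disparities are visible
    rather than hidden inside one overall score."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        totals[group] += 1
        hits[group] += int(y_true == y_pred)
    return {g: hits[g] / totals[g] for g in totals}

# Illustrative predictions: overall accuracy is 75%, but group B
# receives half the accuracy of group A.
data = [("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
        ("B", 1, 0), ("B", 0, 0), ("B", 1, 0), ("B", 1, 1)]
print(accuracy_by_group(data))  # {'A': 1.0, 'B': 0.5}
```

The same disaggregation applies to false-positive and false-negative rates, which often matter more than accuracy in clinical settings.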
Data Privacy and Security:
- AI requires vast amounts of data, and the ethical use of AI in healthcare necessitates protecting data privacy and security.
- Organizations should implement security protocols to ensure that personal information is protected, and that all data is processed in compliance with applicable laws and regulations.
Accountability and Responsibility
- Developers and users of AI systems should be accountable and responsible for their actions and decisions.
- An ethical approach should ensure that there is transparency and accountability in the development and deployment of AI systems.
Trust in AI
- Trust in AI can be undermined by ineffective, inaccurate, and unsafe algorithms.
- Organizations should monitor the performance of AI systems and provide visibility into their performance over time.
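Monitoring performance over time can be as simple as tracking accuracy across a rolling window of recent predictions and flagging when it drifts below a threshold. The sketch below is a hypothetical illustration; the window size, threshold, and class name are assumptions, not any organization's actual practice:

```python
from collections import deque

class DriftMonitor:
    """Track a rolling accuracy window for a deployed model and flag
    degradation, rather than treating the algorithm as 'one and done'."""
    def __init__(self, window: int = 100, threshold: float = 0.85):
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect
        self.threshold = threshold

    def record(self, y_true, y_pred) -> None:
        self.outcomes.append(int(y_true == y_pred))

    def rolling_accuracy(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    def degraded(self) -> bool:
        # Only alert once the window holds enough samples to be meaningful.
        return (len(self.outcomes) == self.outcomes.maxlen
                and self.rolling_accuracy() < self.threshold)

monitor = DriftMonitor(window=4, threshold=0.75)
for y_true, y_pred in [(1, 1), (1, 1), (1, 0), (1, 0)]:
    monitor.record(y_true, y_pred)
print(monitor.rolling_accuracy())  # 0.5
print(monitor.degraded())          # True: performance fell below threshold
```

In practice such an alert would trigger human review and possibly retraining, closing the accountability loop described above.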
Earning Public Trust:
- The deployment of AI in healthcare requires an ethical approach that ensures the trust of the public.
- It is crucial to involve all stakeholders in the development and deployment of AI systems, and to invest in training, regulation, and public awareness.
“Often in health care, it has been perceived that AI or machine learning algorithms are one and done. I build the rules and put it in production and forevermore it’s going to be great. But you have to have visibility into its performance across time.”
Andrew Merrill, Intermountain Healthcare, for STAT News
In conclusion, an ethical approach is crucial to ensure the benefits of AI/ML in healthcare outweigh the risks of harm for all people. By implementing an ethics-by-design approach that focuses on algorithmic bias, data privacy and security, accountability and responsibility, and trust in AI, we can build public trust and harness the full potential of AI in healthcare.