Artificial Intelligence (AI) and Machine Learning (ML) application development is on an exponential trajectory, as evidenced by the growth rate of the AI market [1]. AI and ML are being used in healthcare for prevention, diagnosis, prognosis, and treatment, and in various other ways such as complementing a physician's activities. The vast amounts of data they can process and their ever-increasing compute power give them an edge over human experts; for example, the Apple Watch can pore over data from millions of patients to become an effective atrial fibrillation detector [2]. There is a natural concern about the ethics such applications should adhere to, and about whether there should be a framework or set of guidelines for which the stakeholders of such technology are held responsible. The rapid pace of development in AI & ML, along with its exceptionalism, novelty, and uncertainty, presents new challenges for bioethics [3]. In this blog, the ethical concerns and implications of AI & ML in healthcare, and the attempts to address them, are discussed: first the ethical concerns are delineated, and then attempts to mitigate and address them are explored.

Ethical Concerns

Bias

AI and ML algorithms are known to reproduce the bias present in the data they are trained on. The socioeconomic inequalities and biases of our society are latent in such data and tend to make the algorithms biased, leading to differential outputs across social groups. For instance, Ziad Obermeyer et al. [4] found that by using health cost as a proxy for health need, an algorithm reduced the number of Black patients identified for extra care by more than half.
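The mechanism behind this kind of label bias can be sketched with a toy simulation (the setup and numbers below are hypothetical, not taken from the study): if two groups have the same distribution of health need, but one group incurs systematically lower costs for the same need (for example, because of unequal access to care), then ranking patients by cost rather than need all but excludes that group from extra care.

```python
import random

random.seed(0)

# Toy population: two groups with identical distributions of health need,
# but group "B" incurs lower healthcare costs for the same need
# (e.g. due to unequal access) -- illustrative numbers, not study data.
patients = []
for group, cost_factor in [("A", 1.0), ("B", 0.5)]:
    for _ in range(1000):
        need = random.random()        # true health need, 0..1
        cost = need * cost_factor     # observed cost under-reflects B's need
        patients.append({"group": group, "need": need, "cost": cost})

def top_decile_share(key):
    """Share of group B among the 10% of patients ranked highest by `key`."""
    ranked = sorted(patients, key=lambda p: p[key], reverse=True)[:200]
    return sum(p["group"] == "B" for p in ranked) / len(ranked)

# Ranking by true need selects both groups roughly equally; ranking by cost
# (a biased proxy) almost entirely excludes group B from extra care.
print(f"group B share, ranked by need: {top_decile_share('need'):.2f}")
print(f"group B share, ranked by cost: {top_decile_share('cost'):.2f}")
```

In this sketch group B makes up about half of the top decile when ranked by true need, but essentially none of it when ranked by cost, even though the groups are equally sick by construction.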
In dermatology, a study [5] found that most ML programs are trained largely on images of light skin and concluded that they would therefore underperform on skin of color.
Another study found that most of the AI algorithms being trained for clinical applications in the US used patient cohorts from just three states, California, Massachusetts, and New York [6], introducing geographic bias.
Mitigating such biases in algorithms is challenging and faces real constraints. One illustration is the impossibility theorem [7] [8], which states that a statistical algorithm cannot be fair across all important fairness metrics simultaneously; we must instead pick a subset.
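A small worked example of this trade-off, using hypothetical round numbers: even a perfectly calibrated risk score, thresholded identically for both groups, yields different false-positive and false-negative rates whenever the groups' base rates differ, so calibration and equalized error rates cannot both hold at once.

```python
# Two groups with perfectly calibrated risk scores (a score of s means the
# outcome occurs with probability s), but different base rates -- numbers
# are hypothetical, chosen only to make the arithmetic easy to follow.
# group -> list of (score, number of people at that score)
groups = {
    "A": [(0.2, 600), (0.8, 400)],  # expected positives: 120 + 320 = 440/1000
    "B": [(0.2, 900), (0.8, 100)],  # expected positives: 180 +  80 = 260/1000
}

def error_rates(scores, threshold=0.5):
    """Expected FPR and FNR when flagging everyone at or above `threshold`,
    assuming the scores are calibrated."""
    fp = fn = neg = pos = 0.0
    for s, n in scores:
        pos += n * s           # expected outcome-positives at this score
        neg += n * (1 - s)     # expected outcome-negatives
        if s >= threshold:
            fp += n * (1 - s)  # flagged but outcome-negative
        else:
            fn += n * s        # not flagged but outcome-positive
    return fp / neg, fn / pos

for g, scores in groups.items():
    fpr, fnr = error_rates(scores)
    print(f"group {g}: FPR={fpr:.3f}  FNR={fnr:.3f}")
```

With the same calibrated scores and the same threshold, group A sees FPR ≈ 0.143 and FNR ≈ 0.273, while group B sees FPR ≈ 0.027 and FNR ≈ 0.692: equal treatment by one metric forces unequal treatment by another, which is exactly the tension formalized in [7] and [8].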
Another illustrative challenge is that self-learning algorithms can train on their own flawed outputs and become increasingly biased over time if left unchecked.
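One way such a feedback loop can arise is when a model is retrained on labels derived from its own deployment decisions (for example, "did this patient receive extra care?") rather than on true health need. A minimal sketch under that assumption, with made-up numbers, shows a small initial gap self-amplifying:

```python
# Toy feedback loop (hypothetical numbers): both groups have identical true
# risk, but the model starts with a small under-estimate for group B. Each
# round, care is allocated to the group(s) scoring at or above the average,
# and the model is then retrained on the allocation outcome itself instead
# of on true health need -- a self-fulfilling label.
predicted = {"A": 0.50, "B": 0.45}
initial_gap = predicted["A"] - predicted["B"]

for step in range(6):
    avg = sum(predicted.values()) / 2
    for g in predicted:
        allocated = 1.0 if predicted[g] >= avg else 0.0  # care given by ranking
        # blend the old prediction with the biased allocation label
        predicted[g] = 0.5 * predicted[g] + 0.5 * allocated

print(f"gap grew from {initial_gap:.2f} to "
      f"{predicted['A'] - predicted['B']:.2f}")
```

After six rounds the 0.05 gap has grown to nearly 1.0: group A's score is pushed toward certainty of needing care and group B's toward zero, despite identical true risk. The specific update rule here is an invented illustration, but the underlying pattern, models learning from their own outputs, is the unchecked-drift risk described above.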

Equal Access

The development of AI and ML applications is distributed unevenly across countries and is skewed toward richer ones [9]. This is one driver of the problem of unequal access.
The cost of technologies such as robotics for human augmentation and the bionic eye suggests that, in the future, those with more money may have a significant advantage in accessing such life-changing solutions.
On the other hand, AI and ML solutions in prognosis can lead to better access to palliative care for underrepresented groups [10]. An intervention that used ML mortality-risk predictions to remind clinicians to have serious-illness conversations with patients significantly increased the frequency of those conversations [11]. This, in turn, is expected to increase palliative care uptake and improve resource allocation.

Benefit

The goal of such applications in medicine is to benefit patients, as the basic principles of medical ethics note [12]. Challenging dilemmas remain: whether sharing the output of a highly accurate AI prognosis tool with patients and clinicians is beneficial, for example, can vary across diseases and has to be studied.

Privacy and Security

In a talk organized by the editor-in-chief of the American Journal of Bioethics, Michelle Mello, J.D., Ph.D., notes that "the data of patients doesn't belong to them" [3] and that the law is clear on this, counterintuitive as it may seem. To address this, she suggests that common-interest groups can raise concerns to push for non-exploitative use of patient data, more transparent user agreements, and assurance that data is not used in ways patients are uncomfortable with. For instance, most people would object to an AI/ML app sharing their health analysis with an insurance company, as this could have negative personal consequences.
The use of such data by AI & ML applications also raises the question of who benefits from it. Are the physicians whose diagnoses are incorporated into patients' Electronic Health Records (EHRs) entitled to a share of the profits from AI/ML applications trained on those EHRs?

Responsibility

Who is to be held responsible for the wrong outputs of such applications? Multiple stakeholders, such as clinicians, industry, and the community, make this a nuanced issue. Moreover, AI solutions often rely on deep learning, which is known to be a black box. This hinders understanding of its inner workings and obscures why it arrived at a contentious result in the first place.
Let us now get an overview of the work being done to address these concerns.

Mitigating Bias and Ethical Frameworks

Efforts to mitigate bias include ensuring that input data is inclusive of different groups, staying aware of upstream decisions (such as which topics or issues to address with AI/ML in the first place), fostering participation and engagement, and maintaining an ever-evolving, comprehensive approach.
Over the years, organizations and researchers have set out many different sets of principles intended to serve as ethical guidelines for AI & ML and to help address arising ethical concerns.
A Unified Framework of Five Principles for AI in Society [13] distills a common set of five principles found across the major sets of AI ethics guidelines published by reputable organizations. Notably, four of these already form the basic principles of bioethics:

  1. Beneficence: The notion that applications and solutions will promote well-being, preserve dignity, and sustain the planet.
  2. Non-maleficence: Applications developed will do no harm; the AI will not infringe on personal privacy and will operate within secure constraints.
  3. Autonomy: The relationship between human and AI autonomy is framed as a zero-sum game: one must be traded off against the other, and the question is who makes the decision. Most of the surveyed frameworks agree, however, that any delegation to AI should be overridable.
  4. Justice: The idea that AI will promote fairness, accessibility, and prosperity across social groups. This translates to the need to tackle bias in data and in AI/ML algorithms.

    A new principle added by the authors:
  5. Explicability: Incorporates intelligibility, to tackle the question of how an AI/ML solution works, and accountability, to tackle who is responsible for the way it works. This principle helps address the issue of responsibility.
Regulations help enforce the foundations of such ethical principles and must be kept up to speed with the fast-changing landscape of AI. Regulatory bodies such as the Food and Drug Administration have shown interest, releasing guidance to regulate clinical AI decision-support tools [14].
There is also a need to operationalize these ethical principles across the stages of development and deployment of an AI/ML application. Researchers such as Danton S. Char et al. [15] provide a pipeline-model framework that gives developers a tool to consider ethical questions and decisions throughout the stages of an AI/ML solution in healthcare: conception, development, calibration, and implementation.
The framework additionally facilitates interdisciplinary dialogue and collaboration to understand and manage the implications once such a solution is implemented. Efforts such as these are needed to mitigate our ethical concerns in practice.

Conclusion

The advent of novel AI/ML solutions in healthcare has brought with it the need to understand and address the related ethical implications. Well-established bioethical principles, along with new ones, have been proposed to characterize and understand these issues. This does not mean the principles are settled and comprehensive, since the applications themselves are still relatively new. AI and ML tools will change the way we think, behave, and operate in healthcare, so an evolving ethical framework that preserves the bioethical principles remains necessary.
Bias existed in medical procedures and applications long before the age of AI. The new solutions being offered should aim to reduce that bias rather than magnify it, and new sources of bias must be controlled.
It is equally important to highlight that AI and ML breakthroughs carry the promise of reducing bias, increasing access, and delivering immense improvements in healthcare. We benefit not from shunning or dissuading their development, but from striving to make them conform to our human ethical principles.

References

[1] Artificial Intelligence Market. Valuates Reports via Bloomberg. URL: https://www.bloomberg.com/press-releases/2022-06-13/artificial-intelligence-market-usd-1-581-70-billion-by-2030-growing-at-a-cagr-of-38-0-valuates-reports.
[2] Marco V. Perez et al. “Large-Scale Assessment of a Smartwatch to Identify Atrial Fibrillation”. In: New England Journal of Medicine 381.20 (2019), pp. 1909–1917. DOI: 10.1056/NEJMoa1901183.
[3] David Magnus. Ethics of AI in Healthcare. YouTube, 2021. URL: https://www.youtube.com/watch?v=Irg3jGxa6HM&t=3132s.
[4] Ziad Obermeyer et al. “Dissecting racial bias in an algorithm used to manage the health of populations”. In: Science 366.6464 (2019), pp. 447–453. DOI: 10.1126/science.aax2342.
[5] Adewole S. Adamson and Avery Smith. “Machine Learning and Health Care Disparities in Dermatology”. In: JAMA Dermatology 154.11 (Nov. 2018), pp. 1247–1248. DOI: 10.1001/jamadermatol.2018.2348.
[6] Amit Kaushal, Russ Altman, and Curt Langlotz. “Geographic Distribution of US Cohorts Used to Train Deep Learning Algorithms”. In: JAMA 324.12 (Sept. 2020), pp. 1212–1213. DOI: 10.1001/jama.2020.12067.
[7] Alexandra Chouldechova. Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments. 2016. arXiv: 1610.07524 [stat.AP].
[8] Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan. Inherent Trade-Offs in the Fair Determination of Risk Scores. 2016. arXiv: 1609.05807 [cs.LG].
[9] How Artificial Intelligence Could Widen the Gap Between Rich and Poor Nations. IMF Blog, 2020. URL: https://www.imf.org/en/Blogs/Articles/2020/12/02/blog-how-artificial-intelligence-could-widen-the-gap-between-rich-and-poor-nations.
[10] Matthew DeCamp. Ethics, Bias, and Artificial Intelligence in Healthcare. YouTube, 2023. URL: https://www.youtube.com/watch?v=2Pb7HuZlm2Y&t=2752s.
[11] Christopher R. Manz et al. “Effect of Integrating Machine Learning Mortality Estimates With Behavioral Nudges to Clinicians on Serious Illness Conversations Among Patients With Cancer: A Stepped-Wedge Cluster Randomized Clinical Trial”. In: JAMA Oncology 6.12 (Dec. 2020), e204759. DOI: 10.1001/jamaoncol.2020.4759.
[12] What Are the Basic Principles of Medical Ethics? URL: http://web.stanford.edu/class/siw198q/websites/reprotech/New%20Ways%20of%20Making%20Babies/EthicVoc.htm.
[13] Luciano Floridi and Josh Cowls. “A Unified Framework of Five Principles for AI in Society”. In: Harvard Data Science Review 1.1 (July 2019).
[14] FDA Releases Guidance on AI-Driven Clinical Decision Support Tools. HealthITAnalytics.
[15] Danton Char, Michael David Abràmoff, and Chris Feudtner. “Identifying Ethical Considerations for Machine Learning Healthcare Applications”. In: The American Journal of Bioethics 20 (2020), pp. 7–17.