Medha Tripathi

A Critique of Artificial Intelligence in Medicine

As society advances toward greater task automation via technology, the applications of artificial intelligence (AI) have become increasingly complex. IBM, a leading technology company, defines AI as a tool that “leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind” (“What Is”). The implementation of AI across sectors of modern society has increased the efficiency of completing tasks while also posing several pertinent liabilities. In the context of medicine, the application of AI will ultimately inflict various detrimental effects on the industry. These effects include, but are not limited to, confidentiality issues, algorithmically induced biases, heightened socioeconomic inequity, and the dehumanization of medicine.

A large threat posed by AI in the medical field is the heightened privacy risk associated with access to sensitive patient data. Patient confidentiality is heavily emphasized throughout medical training and legally enforced through laws such as HIPAA, and patients and providers alike are apprehensive about the security of AI. An article indexed by the National Institutes of Health explores the privacy concerns associated with AI in three countries and states that “frequent incidents of personal information infringement, and breaches in medical data have eroded the public’s confidence in data processing” (Wang). The article further elaborates on the dangers of storing and analyzing all medical data in AI-based electronic records, as hackers or others with malicious intent could gain access to mine data or potentially alter AI feedback. Dr. Larry Cohen, a physician representing the American Medical Association in a podcast interview, echoes these cybersecurity concerns. When asked about the increased usage of AI in medicine, he notes that “the number of breaches, the most breaches, and the potential harm from those breaches of the most personal data is really devastating” (“The State”). Physicians have partnered with technology companies in hopes of better securing patient data, but all parties acknowledge that safety concerns have grown with the implementation of AI. An additional risk is that this technology is often owned by private companies or startups, which could give corporations “an increased role in obtaining, utilizing, and protecting patient health information,” according to an article in BMC Medical Ethics (Murdoch). This makes AI corporations major stakeholders in medicine, and they must be tightly regulated by law to ensure that patient data remains safe in the hands of private entities.

Another concern associated with the application of AI in medicine is the bias and discrimination that may be inherent in its algorithms. The Medical Imaging and Data Resource Center (MIDRC) addresses the concerns associated with the increased usage of AI in healthcare and identifies “over 30 sources of potential bias,” including systemic bias, loss-of-situational-awareness bias, and sampling bias (“AI/ML”). This is largely because AI lacks the human ability to make critical decisions and collaborate with colleagues the way a provider team can; instead, it recognizes patterns and produces inferences based on whatever information it has been given. This hesitation about utilizing AI because of its biases is shared by faculty members Trishan Panch and Heather Mattie of the Harvard T.H. Chan School of Public Health. Panch and Mattie describe how algorithmic bias would affect medicine: because AI builds its algorithms from existing data that reflects “inequities in socioeconomic status, race, ethnic background, religion, gender, disability, or sexual orientation,” it “amplif[ies] inequities in health systems” (Igoe). An example is the Framingham Heart Study, where the AI-predicted cardiovascular risk score was more accurate for Caucasian patients than for African American patients because less data was available on African Americans, a pattern that holds for underrepresented patient populations generally. Hence, AI can perpetuate systemic bias, as data is limited for groups that may already face other socioeconomic barriers to healthcare. Medicine is continually evolving in its approach to equity and inclusivity, but an article published in Artificial Intelligence in Medicine argues that AI would exacerbate disparities by reinforcing patterns that existed in the past (Straw). Although there is growing awareness of the need to expand the demographic framework to encompass more races, sexualities, and gender identities, there “are still many treatment and drug recommendations that are based on studies drawn from samples of young white men” (Straw). Therefore, allowing AI to draw from these limited data cohorts without designing it to acknowledge these inequities will cause it to produce biased results.
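To make the sampling-bias mechanism concrete, here is a minimal sketch in Python using entirely synthetic data: a model is trained on a dataset dominated by one group and then evaluated on each group separately. The group sizes, features, and coefficients are invented for illustration; they are not drawn from the Framingham study or any other source cited above.

```python
# Minimal sketch: a model trained on demographically imbalanced data can be
# less accurate for the underrepresented group. All data is synthetic and
# all numbers are assumptions chosen for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, coefs):
    """Generate n synthetic 'patients': three risk features and a binary
    outcome whose true dependence on the features is given by coefs."""
    X = rng.normal(size=(n, 3))
    y = (X @ coefs + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

# Majority group: 900 training records; minority group: only 100.
# The outcome depends on the features differently in each group, so a
# single model fit mostly to the majority transfers poorly.
X_maj, y_maj = make_group(900, np.array([1.0, 1.0, 0.0]))
X_min, y_min = make_group(100, np.array([0.0, 1.0, 1.0]))

model = LogisticRegression().fit(
    np.vstack([X_maj, X_min]), np.concatenate([y_maj, y_min])
)

# Evaluate each group separately on fresh synthetic data: accuracy is
# typically higher for the group that dominated the training set.
X_t, y_t = make_group(1000, np.array([1.0, 1.0, 0.0]))
print("majority-group accuracy:", accuracy_score(y_t, model.predict(X_t)))
X_t, y_t = make_group(1000, np.array([0.0, 1.0, 1.0]))
print("minority-group accuracy:", accuracy_score(y_t, model.predict(X_t)))
```

Run as written, the two printed accuracies typically differ by a wide margin; that gap, produced purely by who is represented in the training data, is the statistical core of the disparity Panch and Mattie describe.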

However, one may contemplate whether the implementation of AI in healthcare will bring benefits that outweigh the drawbacks. One benefit is AI’s capability in diagnostic imaging. When analyzing the pros and cons of AI in radiology, V7 Labs acknowledged “enhanced analysis, generating 3D models, [and] quicker results” as positive consequences of utilizing AI (Sajid). However, the article counters that AI diagnostic imaging is accompanied by a lack of standardization, explainability, and validation datasets, and it may cause the breaches of privacy mentioned earlier. Another benefit of AI is its ability to quickly formulate a treatment plan for patients. Dr. Tom Purdie and Dr. Chris McIntosh, both advocates of AI in medicine, claim that AI “can create high-quality treatment plans in just minutes” for potential candidates for radiation (“AI Treatment”). Although this would reduce the time devoted to curating a treatment plan, AI is more prone to produce false positive or false negative results than a clinician working without AI (Bernstein). An article in the Canadian Medical Association Journal enters this debate by acknowledging advantages that AI may provide, such as “reduc[ing] the number of interruptions a nurse endures during a shift, help[ing] with diagnosis, and handl[ing] mundane tasks such as scheduling and tracking the number of available beds” (O’Neill). This may streamline simpler tasks, but the five private-sector leaders interviewed still emphasize that professionals are irreplaceable, since they have the intuition and awareness that AI lacks. Thus, the application of AI in medicine would be holistically disadvantageous, despite its few perks.
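Because the Bernstein finding turns on error types, it may help to spell out how false positive and false negative rates are computed. The sketch below uses invented counts purely for illustration; none of these numbers come from the cited studies.

```python
# Worked example of the two error types discussed above. A "false positive"
# flags a healthy patient as diseased; a "false negative" misses a diseased
# patient. All counts are hypothetical.
tp = 80   # diseased patients correctly flagged (true positives)
fp = 30   # healthy patients incorrectly flagged (false positives)
fn = 20   # diseased patients missed (false negatives)
tn = 870  # healthy patients correctly cleared (true negatives)

false_positive_rate = fp / (fp + tn)  # 30 / 900  -> 3.3%
false_negative_rate = fn / (fn + tp)  # 20 / 100  -> 20.0%
sensitivity = tp / (tp + fn)          # 80 / 100  -> 80.0%

print(f"false positive rate: {false_positive_rate:.1%}")
print(f"false negative rate: {false_negative_rate:.1%}")
print(f"sensitivity:         {sensitivity:.1%}")
```

Note how a tool can look accurate overall (950 of the 1,000 hypothetical patients are classified correctly) while still missing one in five true cases; that gap between headline accuracy and per-error-type performance is precisely the clinical risk at issue.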

The implementation of AI in medicine would also likely reduce the number of hospital workers, raising numerous socioeconomic issues. An economic study endorsed by the White House explores how unemployment will surge as “machines begin to replace human workers,” a concern that certainly applies to healthcare. Jobs that could be automated include medical coders, medical transcriptionists, laboratory technologists, medical schedulers, medical collectors, and pharmacy technicians (“Top 10”). These are among the medical professions that require shorter training periods and cost less than medical school. With AI potentially replacing these workers, people would be required to seek further education or specialization to find a job in the changing market (Kingson). Advanced schooling is often not an option for those below the poverty line, which is why these workers would be disproportionately affected by AI compared to the rest of the population. Therefore, medical automation will harm people with lower socioeconomic statuses.

Most importantly, AI would diminish the interpersonal relationships formed between patients and providers. Both healthcare providers and patients are apprehensive about AI, with a recent Pew Research Center poll finding that “60% of Americans would be uncomfortable with provider[s] relying on AI in their own health care” (Nadeem). A cornerstone of medicine is the trust the patient places in the healthcare system and its staff, and it is evident that patients are wary of AI playing a role in their care. Further, AI is unable to notice tells or read body language, which physicians rely on to better understand their patients’ concerns. According to Carol Kinsey Goman, Ph.D., AI cannot accurately interpret body language, as doing so requires grasping “nonverbal cues, the context-dependent interpretations, the need for emotional intelligence, and the ever-evolving nature of human interactions” (Goman). A provider can understand a patient’s tendencies through the long-term relationship they build on confidentiality and trust, and substituting AI for that relationship would dehumanize the medical field. Beyond its poor reading of human tells, AI also lacks the ability to make ethical decisions for patients the way a provider can, especially in challenging situations. Medical personnel undergo intensive training throughout medical school and residency to appropriately attend to moral dilemmas, and they must also possess the emotional capacity to inform patients and provide support. AI possesses neither the training to adapt to individualized situations, nor emotional intelligence, nor the ability to overcome its aforementioned susceptibility to bias. Patients trusting their medical system, providers reading nonverbal cues, and staff navigating ethical issues are all contingent on strong patient-provider relationships, none of which is replicable with AI.

In the final analysis, the adoption of AI in the healthcare sector is likely to give rise to a number of challenges: less secure patient data, biased results, disproportionate harm to those of lower socioeconomic standing, and an inability to replace the nuanced emotional capacities of a medical worker. While it is undeniable that AI has the potential to transform various sectors of society, its integration into healthcare demands careful deliberation. The prospective benefits must be weighed against the risks to ensure that AI ultimately serves the best interests of patients, healthcare providers, and society.


References

“AI Treatment Plans Used in Patients.” UHN Research, www.uhnresearch.ca/news/ai-

“AI/ML Bias Awareness Tool — MIDRC.” MIDRC, www.midrc.org/bias-awareness-tool.

“The State of AI, Cyber Security and Health Data Privacy in Medicine With Larry Cohen.” American Medical Association podcast.

“Top 10 Healthcare Jobs That AI Will Displace.” Health Journal, 12 Feb. 2023.

Bernstein, Michael H., et al. “Can Incorrect Artificial Intelligence (AI) Results Impact Radiologists, and if so, What Can We Do About It? A Multi-reader Pilot Study of Lung Cancer Detection With Chest Radiography.” European Radiology, Springer Science+Business Media, 2 June 2023, https://doi.org/10.1007/s00330-023-09747-1.

Goman, Carol Kinsey. “Why AI Can’t Read Body Language . . . Yet.” Forbes, 27 July 2023, www.forbes.com/sites/carolkinseygoman/2023/07/27/why-ai-cant-read-body-language----yet/?sh=351f7a181e58.

Igoe, Katherine J. “Algorithmic Bias in Health Care Exacerbates Social Inequities — How to Prevent It.” Executive and Continuing Professional Education, Harvard T.H. Chan School of Public Health, 12 Mar. 2021, www.hsph.harvard.edu/ecpe/how-to-prevent-algorithmic-bias-in-health-care.

Kingson, Jennifer. “AI and Robots Fuel New Job Displacement Fears.” Axios, 2 Apr. 2023, www.axios.com/2023/03/29/robots-jobs-chatgpt-generative-ai.

Murdoch, Blake. “Privacy and Artificial Intelligence: Challenges for Protecting Health Information in a New Era.” BMC Medical Ethics, vol. 22, no. 1, Springer Science+Business Media, 15 Sept. 2021, https://doi.org/10.1186/s12910-021-00687-3.

Nadeem, Reem. “How Americans View Use of AI in Health Care and Medicine by Doctors and Other Providers.” Pew Research Center Science & Society, 6 Mar. 2023, www.pewresearch.org/science/2023/02/22/60-of-americans-would-be-uncomfortable-with-provider-relying-on-ai-in-their-own-health-care.

O’Neill, Caroline. “Is AI a Threat or Benefit to Health Workers?” Canadian Medical Association Journal, vol. 189, no. 20, Canadian Medical Association, 22 May 2017, p. E732, https://doi.org/10.1503/cmaj.1095428.

Sajid, Haziqa. “AI In Radiology: Pros and Cons, Applications, and 4 Examples.” V7, 20 Apr. 2023, www.v7labs.com/blog/ai-in-radiology.

Straw, Isabel. “The Automation of Bias in Medical Artificial Intelligence (AI): Decoding the Past to Create a Better Future.” Artificial Intelligence in Medicine, vol. 110, Elsevier BV, 1 Nov. 2020, 101965, https://doi.org/10.1016/j.artmed.2020.101965.

Wang, Chao, et al. “Privacy Protection in Using Artificial Intelligence for Healthcare: Chinese Regulation in Comparative Perspective.” Healthcare, vol. 10, no. 10, Multidisciplinary Digital Publishing Institute, 27 Sept. 2022, 1878, https://doi.org/10.3390/healthcare10101878.

“What Is Artificial Intelligence (AI)?” IBM, www.ibm.com/topics/artificial-intelligence.
