Limits of trust in medical AI
Joshua James Hatherley

School of Historical, Philosophical, and International Studies, Monash University, Clayton, VIC 3194, Australia

Correspondence to Joshua James Hatherley, School of Historical, Philosophical, and International Studies, Monash University, Clayton, VIC 3194, Australia; joshua.hatherley@monash.edu

Abstract

Artificial intelligence (AI) is expected to revolutionise the practice of medicine. Recent advances in the field of deep learning have demonstrated success in a variety of clinical tasks, including detecting diabetic retinopathy from images, predicting hospital readmissions, and aiding in the discovery of new drugs. AI’s progress in medicine, however, has raised concerns about the potential effects of this technology on relationships of trust in clinical practice. In this paper, I argue that these concerns have merit: AI systems can be relied on, and are capable of reliability, but they cannot be trusted, and are not capable of trustworthiness. Insofar as patients are required to rely on AI systems for their medical decision-making, this has the potential to produce a deficit of trust in relationships in clinical practice.

  • ethics
  • information technology
  • quality of health care

Footnotes

  • Contributors JJH is the sole author.

  • Funding Research for this paper was funded through an Australian Government Research Training Program Scholarship.

  • Competing interests None declared.

  • Patient consent for publication Not required.

  • Provenance and peer review Not commissioned; externally peer reviewed.
