The Misguided Expectations of Human Overseers in AI in Healthcare

DOI:

https://doi.org/10.15346/hc.v12i1.149

Abstract

This commentary draws on collaborative workshops and ethnographic inquiry in hospital settings to examine the interplay between medical practitioners (clinicians and nurses) and artificial intelligence (AI). The research points to a striking finding: the prevailing emphasis on ethical AI places undue strain on physicians by obligating them to undertake continuous 'digital literacy' training. This imposition not only adds to the existing burdens of healthcare professionals but also fosters a false sense of security, since most are not specialists in software programming or AI. The investigation underscores the challenges and ethical quandaries inherent in the human-AI partnership in healthcare. Furthermore, the notion of the physician as 'human overseer,' treated as a requisite component of 'ethical AI' under legislative mandates, proves largely fallacious: it shifts a complex ethical dilemma onto individual responsibility, even though not all clinicians in the loop are able to rebut AI outcomes or grasp the complexities of AI algorithms.

Published

2026-01-13

How to Cite

The Misguided Expectations of Human Overseers in AI in Healthcare. (2026). Human Computation, 12(1). https://doi.org/10.15346/hc.v12i1.149