Machine Learning and Clinical Decision Making

by Nick

Posted on 14 May, 2017

I wrote a response to Siddhartha Mukherjee’s article “A.I. vs. M.D.,” which appeared in the New Yorker last month. I submitted it as a letter to the editor, but it wasn’t published; in retrospect, it was perhaps a bit long-winded for their curt and pithy letters section. Mukherjee’s article came out on the heels of Evan submitting his latest work on improving the consistency of infectious disease prediction using interpretable model averaging methods. What follows is the letter I submitted.

Siddhartha Mukherjee nicely characterizes both sides of a looming paradigm shift in clinical practice: how to incorporate the promises of “big data” into decision-making and diagnoses that impact real people (“A.I. vs. M.D.,” April 3rd). In doing so, he exposes a rift not just between machine learners (who cling to their data and models) and clinicians (who often trust in anecdote and experience), but also within the machine learning community itself.

Many “deep learning” algorithms are, as Mukherjee describes, black boxes whose promise to “replace dermatologists and radiologists” would understandably rankle even very forward-thinking diagnosticians. When life-and-death decisions are being made, it is a lot to ask anyone to trust the black box.

However, not all predictive models are, or need to be, opaque to the experts they are designed to aid. Many (certain “model averaging” approaches, for example) can tell clinicians and patients more about how they work and why they succeed or fail. The development and investigation of such transparent approaches can and should be emphasized over opaque approaches that thwart interpretability.

While data-driven approaches deserve to make their way into clinical and public-health decision-making, the benefits for patients will be maximized only if clinicians, biostatisticians, and computer scientists partner closely. To succeed, future efforts must focus on enhancing existing intuitive, interpretable predictive models and on finding ways to peel back the layers of complexity in others. With transparency will come trust, and improved clinical care.
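
To give a rough sense of the kind of transparency the letter alludes to, here is a minimal, hypothetical sketch of a weighted average of simple models, not the specific methods in Evan’s work or Mukherjee’s article. The synthetic data, feature names, and likelihood-based weighting are illustrative assumptions only; the point is that the model weights and coefficients can be read off directly.

```python
# A hypothetical sketch of an interpretable "model average": several simple
# logistic regressions, each using a different subset of features, combined
# with weights that can be inspected alongside each model's coefficients.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import log_loss

# Synthetic stand-in for a clinical dataset with named measurements.
feature_names = ["age", "temp", "wbc_count", "crp"]
X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Fit several simple models, each on a different subset of features.
subsets = [[0, 1], [1, 2, 3], [0, 2, 3]]
models = [LogisticRegression().fit(X_train[:, s], y_train) for s in subsets]

# Weight each model by its validation likelihood (a crude proxy for a
# posterior model weight), then normalize.
losses = np.array([log_loss(y_val, m.predict_proba(X_val[:, s])[:, 1])
                   for m, s in zip(models, subsets)])
weights = np.exp(-losses)
weights /= weights.sum()

# The "explanation": which models carry the weight, which features they use,
# and what their coefficients are.
for w, m, s in zip(weights, models, subsets):
    used = [feature_names[i] for i in s]
    print(f"weight={w:.2f}  features={used}  coefs={np.round(m.coef_[0], 2)}")

# The averaged prediction for a new patient is just the weighted mean of the
# individual models' predicted probabilities.
x_new = X_val[:1]
p = sum(w * m.predict_proba(x_new[:, s])[0, 1]
        for w, m, s in zip(weights, models, subsets))
print(f"averaged predicted risk: {p:.2f}")
```

Nothing here is a black box: a clinician can see which simple model is carrying most of the weight and which measurements it relies on, which is one route, under these toy assumptions, to the transparency the letter argues for.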