A.I. and the Medicine of the Future

Could A.I. revolutionise medical treatment?

Every generation needs an object of revolutionary fervour. For medicine at the end of the last century, it was the idea that all treatment should be founded on evidence. The revolutionaries demanded data behind each and every medical decision; the reactionaries argued that the void of data is precisely what a doctor’s expertise is supposed to fill. Those of us then only just entering the field simply wondered on what, if not evidence, medicine could have been based all this time.

As so often happens in the world of ideas, each side was wrong on a cardinal point they both agreed on. The error is not easy to see, so let’s proceed step by step.

The subject of medicine is the individual patient, its task to determine what to do in his or her specific case. Such judgment must naturally be drawn from the study of other patients, and so depends on how knowledge of individuals is distilled from knowledge of populations. This has traditionally been done with little formality, more often than not left as a matter of tacit experience.

___

"How smartphones unlock themselves on sight of your face will be how medicine unlocks your individual diagnosis and treatment on sight of your physiology"

___

 

Now the proponents of evidence-based medicine insisted that clinical decisions must proceed from the statistical analysis of objectively determined population data. The more familiar of the two problems this introduces is the limit of what we can practically measure. Blood pressure is simple to record and quantify; the perfusion of one’s limbs is neither. Less familiar but far more constraining is the limit of statistical inference. The “significance” of an observation cannot be easily established unless it is described by a relatively small number of variables. The joint consequence is an endless hunt for “biomarkers”: isolated indices of disease or response to treatment one can both measure and subject to statistical analysis. When you ask your doctor to predict your individual risk of (say) a heart attack, it is from a handful of such biomarkers—blood pressure, cholesterol, etc.—that the prediction will be derived.
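To make concrete what such a biomarker-driven prediction looks like, here is a minimal sketch of a logistic risk score of the kind conventional models take: a weighted sum of a few measurements passed through a logistic link. The coefficients are invented for illustration and have no clinical meaning whatsoever.

```python
import math

def heart_risk(systolic_bp, cholesterol):
    """Toy logistic risk score from two biomarkers.
    Coefficients are invented for illustration, not for clinical use."""
    z = -10.0 + 0.04 * systolic_bp + 0.015 * cholesterol  # weighted sum of the biomarkers
    return 1.0 / (1.0 + math.exp(-z))                     # logistic link: probability in (0, 1)

# Higher readings yield a higher predicted risk:
low = heart_risk(120, 180)
high = heart_risk(160, 240)
```

The point is the form, not the numbers: the whole of a patient’s cardiovascular individuality is collapsed into a weighted sum of two measurements.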

As anyone who has ever been a patient will testify, such predictions are rarely accurate. We need better “biomarkers”, the revolutionaries say. But must they always, even commonly, exist? Consider that most individuating of body parts: the human face. Below (figure 1) is the mean image of a highly selected, reasonably uniform cohort of people: all living cardinals of the Catholic Church.

 


 

Figure 1 - Mean image of Catholic Cardinals


 

Its blandness reminds us that no single feature—e.g. the distance between the eyes—could plausibly distinguish each cardinal from every other, for individuality is here conveyed in the complex conjunction of an irreducibly large number of features. It would not help to add a thousand more cardinals, even if they existed, for the variability here is not noise but a reflection of the essential diversity of faces. And what is true of the face may be true of any other part of the body, not only in surface appearances but also in the causal mechanisms of health and disease. Nothing compels biological causality to be simple: it can operate through complex causal fields as opaque as those governing the evolution of hurricanes. Indeed, for all its natural reductiveness, contemporary medical science considers the causation of most major diseases to be multifactorial. Satisfying the prerequisites for evidence-based medicine, then, renders us faithless to our fundamental biological nature.
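The point can be made computationally. In the purely illustrative sketch below, each “face” is a vector of many synthetic features: no single feature reliably re-identifies a person from a second, noisy measurement, yet the joint pattern across all features identifies everyone.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_features = 50, 200                 # a cohort described by many measurements each
faces = rng.normal(size=(n_people, n_features))            # one row per person
remeasured = faces + 0.3 * rng.normal(size=faces.shape)    # the same people, measured again noisily

def identify(gallery, probe):
    """Return the index of the gallery row nearest to the probe (Euclidean distance)."""
    return int(np.argmin(np.linalg.norm(gallery - probe, axis=1)))

# Using all features jointly, everyone is correctly re-identified.
hits_joint = sum(identify(faces, remeasured[i]) == i for i in range(n_people))

# Using any single feature alone, identification largely collapses.
hits_single = sum(identify(faces[:, :1], remeasured[i, :1]) == i
                  for i in range(n_people))
```

Individuality here lives in the conjunction of the 200 features; projected onto any one axis, people blur into one another, exactly as the cardinals blur into the mean face.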

Be that as it may, both sides agree the individual complexities of human beings cannot be fully captured by any conventional statistical model. Yet they draw radically different conclusions. For the evidence-based revolutionary, loss of individuation is the sacrifice we must make to render our beliefs objectively quantifiable. For the old-school reactionary, it implies medicine could only ever be evidence-based at the margins. One side says the fire is preferable to the frying pan, the other simply invites you to enjoy the heat.

To see why both sides are wrong, reconsider the problem of recognising faces. Is it by magic that, having seen it once, I can reliably distinguish your face from every other? There need be nothing informal about the way the brain solves this and every other problem: the formality of the solution is merely obscured from view. Equally, nothing compels statistics to be simple other than a historical constraint on computational cost and science’s emotional attachment to intellectual economy. We can use statistics to identify faces, just not the statistics conventionally understood by the term.

What statistics is this? To call it machine learning, the popular term, is doubly misleading, for its mechanics are mere instruments of mathematics, and it infers, rather than retains, facts. Rather, we should think of it as inference for problems constitutionally lacking simple solutions, precisely the kind medicine must solve. Done correctly, it is no less formal or rigorous than conventional statistics, yet capable of powering models of biology complex enough to be properly individuating. Yes, it is commonly applied to digital engineering problems better dissolved than solved, such as the prediction of online browsing habits. But how smartphones unlock themselves on sight of your face will be how medicine unlocks your individual diagnosis and treatment on sight of your physiology. This is the future, and would have been the past, had we then had the power.

Yet another empty promise, you say, from a favoured fad of the current generation. But unlike many other promised advances this is readily and immediately testable. Taking as an example the most complex part of the body—the brain—and arguably the most important of its treatable diseases—stroke—we have recently shown that simple statistical models of drug trials struggle to detect even very large treatment effects. Drugs successful in experimental animals may thus so often fail in human trials not because they do not work but because conventional statistics are ill-suited to the task. Replace the statistics with advanced machine learning, and a drug’s true effect will be correctly revealed. We may be sitting on a dozen effective drugs, mothballed in error merely because we misinterpreted their trials.
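The underlying statistical point can be illustrated with a toy simulation (not the paper’s actual method, and with all numbers invented): a treatment effect spread thinly across many outcome measures escapes measure-by-measure testing, yet is obvious to a joint, multivariate test.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, shift = 40, 100, 0.25                # patients per arm, outcome measures, small per-measure effect
control = rng.normal(size=(n, p))
treated = rng.normal(size=(n, p)) + shift  # treatment nudges every measure a little

# Conventional route: one two-sample t statistic per measure, judged against
# a conservative Bonferroni-style critical value (~|t| > 3.7 for 100 tests at 5%).
se = np.sqrt(treated.var(axis=0, ddof=1) / n + control.var(axis=0, ddof=1) / n)
t_stats = (treated.mean(axis=0) - control.mean(axis=0)) / se
n_significant = int((np.abs(t_stats) > 3.7).sum())  # the diffuse effect is near-invisible per measure

# Multivariate route: test the joint shift of all measures at once,
# via a permutation test on the squared distance between the group mean vectors.
def gap(a, b):
    return float(np.sum((a.mean(axis=0) - b.mean(axis=0)) ** 2))

observed = gap(treated, control)
pooled = np.vstack([treated, control])
null = []
for _ in range(999):
    perm = rng.permutation(2 * n)
    null.append(gap(pooled[perm[:n]], pooled[perm[n:]]))
p_joint = (1 + sum(g >= observed for g in null)) / (1 + len(null))
```

With these invented numbers the per-measure tests find essentially nothing, while the joint test detects the treatment effect decisively: the effect was always there, only the statistics were too simple to see it.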

So are we on the brink of another revolution? It is always hard to resist the thrill of the barricades, but since this innovation holds appeal for both factions, it ought to be reason, if anything, to dismantle them. It enables us to formalise more, not less, evidence yet humanise the process, rendering it more like the rich, implicit intelligence of a clinician. Each side can have the other’s cake and eat its own.

Still, it will not be easy. The individuating power of machine learning critically depends on the size and inclusiveness of the data it is trained on. Think of it as a pupil no teacher has ever encountered: infinitely attentive, incapable of boredom, yet a very slow learner. We shall need patience, resolute will, and substantial resource to create the right teaching environment for it. But it is here, in the centralised, top-down NHS, that characteristics so often misrepresented as defects could catalyse a transformation far harder to achieve in the more fragmented healthcare systems in operation elsewhere. Bevan will be smiling in his grave.


PN is funded by the Wellcome Trust, the Department of Health, and the UCLH NIHR Biomedical Research Centre

“High-dimensional therapeutic inference in the focally damaged human brain,” Tianbo Xu, Hans Rolf Jäger, Masud Husain, Geraint Rees, and Parashkev Nachev, Brain, online November 15, 2017, awx288, https://doi.org/10.1093/brain/awx288

 
