Review - Artificial intelligence in clinical practice: Implications for physiotherapy education

Article: Artificial intelligence in clinical practice: Implications for physiotherapy education
Article status: review
Author: David Nicholls
Review date: 16th July 2018
DOI: to be allocated

Thank you for a very interesting and engaging article on AI and its implications for clinical physiotherapy practice.

Overall, I thought the article set out a strong argument for physiotherapy educators and regulators to think carefully about the way physiotherapy thinking and practice will evolve in the near future. Here are a few comments and thoughts on your argument.

Firstly, I wonder whether it is entirely accurate to portray the technological advance of human history in terms of three distinct periods: a pre-machine age, and a first and second machine age? Many historians, for instance, argue that the development of the printing press and the means of mass food production were at least as significant as the invention of steam power. And later inventions like electric light and telephony probably brought about different machine ‘ages’ in their own right. Hence I think your argument that ‘For thousands of years the trajectory of human progress showed only a gradual increase over time with very little changing over the course of successive generations. If your ancestors spent their lives pushing ploughs there was a good chance that you would spend your life pushing a plough’ may be going too far in service of a particular rhetoric.

I also wonder if you create too much of a sense of epoch when you talk about ‘The introduction of machine power [making] a mockery of previous attempts to shape our physical environment and created the conditions for the mass production of material goods, ushering in the first machine age and fundamentally changing the course of history’. I’m not sure it mocked previous cultures so much as supplemented them. Evidence for mockery would be useful if you hold to that view.

Throughout your paper there is an implicit socio-political critique that runs unstated. For instance, Marx and Engels argued that the Industrial Revolution created the conditions for worker exploitation and the creation of capitalism, and many have argued that the move to big data (mentioned later in your piece) is really a manifestation of corporate neoliberalism (big data capture and manipulation by the ‘Big Five’). Is there a place here, then, for a critical comment on whether these machine ‘ages’ also further corporate interests over the personal? (It would be interesting, for instance, to comment on how much of the AI discourse is currently being driven by American-based military interests.)

I wonder, too, whether there is an inadvertent binary created between machines and humans, ‘us’ and ‘them’ – particularly later in the article. You talk about healthcare professions being ‘replaced by more appropriate alternatives’, about learning ‘when to hand over control to them, and when to ignore them’ (my emphasis). You talk about us being deluded about how replaceable we are, and of finding a niche in areas that ‘computers find difficult – or impossible – to replicate’, and you situate this firmly in a traditional humanistic notion of ‘care’.  Surely this goes against your earlier argument that ‘we’ should learn to integrate better with the kinds of augmented intelligences that are now increasingly available; that we should be redefining the boundaries of ‘human’ thought and capabilities like reason, cognition, and even caring; and that we should cease to make artificial distinctions between humans and machines.

To that end, I wonder if a brief discussion of the possibilities of AI to radically reshape our notion of ourselves is in order. It would be good to see if you think this is any different to the way people talked about telephones or the railways or looms in the past. You mention, for instance, that people will need ‘technological literacy that enables them to understand the vocabulary of computer science and engineering that enables them to communicate with machines’. But I’ve never needed to be an electrical engineer to use the phone, so why should things change now? You also suggest that machines will be the repositories of vast amounts of information that humans cannot possibly handle. But, again, hasn’t this always been the case? Is not a library just a vast machine (repository) for the storage and retrieval of knowledge? We have, of course, seen times in the past when massive data on the population became necessary. The birth of epidemiology and statistics in the 18th century was driven by the growth in urban (poor and rebellious) populations, and so much data was generated from that that it was impossible for humans to hold it in their heads.

Finally, there are one or two typos and grammatical errors within the text that can be ironed out in the final review of the paper. Thank you again for the provocation to think through these issues. I look forward to your response to these thoughts.

One Reply to “Review (Nicholls) – Artificial intelligence in clinical practice: Implications for physiotherapy education”

  1. Dear Dave

    Thank you for your insightful review of my article. For the most part, I agree with your comments and have made several changes to reflect your concerns.

    The categorisation of social development and population growth into three distinct ages (pre-machine, first machine and second machine) was used as a rhetorical device that I thought would help to make a clear point about the impact of certain technologies on society. Since it was not central to the argument I have removed reference to these “epochs”. However, I have left in the point that there is evidence that the emergence of AI-based technologies will have a major impact on health, educational and social contexts. I have also left the text that accompanies the graph (from Morris, 2011) much the same, since it is aligned with the data showing a sharp increase in human population growth and associated social development following the Industrial Revolution. While other technologies have made an impact on our lives I have not found any data to show that their contribution to our material well-being was more significant than the production of mechanical power with steam engines.

    I have attempted to tone down the rhetoric and removed the reference to “mockery”, which was meant to highlight the enormous difference between a human being’s capacity to generate force versus a machine powered by steam. It was not written to suggest a mockery of previous or different cultures. If this was confusing then it is probably best left out.

    I have also not elaborated on the theories of Marx or Engels, since that is beyond the scope of what I had intended to do in this article. I agree that a theoretical/critical position on these ideas would be interesting to explore, especially with respect to your comment about how the discourse is currently being shaped by governments, venture capital firms, entrepreneurs and software engineers. This might be linked to your suggestion to include “a brief discussion of the possibilities of AI to radically reshape our notion of ourselves”. Perhaps we could write a paper together?

    Your point about the binary created between “us” and “them” is an important comment, and one that I struggle to articulate well. On the one hand we are separate from machine intelligence, especially as it is currently implemented. AI-based algorithms run on hardware that is separate from a person, for example, in any smartphone. On the other hand, the fact that I don’t have to remember any phone numbers means that the retention and recall functions of my brain are augmented by the phone. The physical phone is separate from my physical person but the software on the phone enhances my cognitive abilities in many ways. There are times when I trust my own judgment (e.g. by ignoring suggested articles presented to me in a news feed) and other times when I hand over control to the phone (e.g. using GPS instead of my own sense of direction). I have tried to make the point that machines might end up being thought of as colleagues, and that we would share a collective intelligence with them, ignoring or accepting their suggestions at different times just as we would do with human colleagues.

    I’m not trying to advance a posthuman argument, which is where I think you were leading in some of your comments e.g. “redefining the boundaries of ‘human’ thought and capabilities” and “cease to make artificial distinctions between humans and machines”. I went through the text and can’t see where I’ve explicitly made these points. I agree that, in cases when an algorithm suggests a direction of management that is implemented by a person, it is difficult to say where the decision originated; the collective intelligence of the machine and human came to the decision together. But the purpose of this article was to highlight the practical implications of narrowly constrained AI as it relates to achieving very specific tasks. I suspect that your comments are aimed at where we may end up in the future, but for now I simply wanted to focus on the practical implications of having ever-smarter machines at our disposal. If I have misunderstood your comment, please do let me know.

    Your scepticism about the need to develop technological literacy makes sense, as long as you’re only talking about being a “user” of technology. I agree that you don’t need to be an engineer to use a phone. But when the apps on your phone start making choices on your behalf (for example, by deciding what information to show you in your news feed, or what route you should take when driving), isn’t there some value in understanding how those choices are being made? If we are to trust the choices made by AI-based systems in healthcare then can we afford to leave algorithm design to software engineers who have no background in patient care? In addition, when clinical care is deeply integrated with data science – as I believe it will be – it seems that clinicians who are able to use that streaming data in real time (i.e. gather, filter, analyse and interpret it in order to inform decisions) will have better outcomes than those who cannot. In the past we only needed to know what to do with hardware: do I know what buttons to press, and in which order? Now that software is on the verge of making clinical decisions, surely we need clinicians who have the data and technological literacy to understand how those decisions were made? I’ve edited some parts of the article to try to make this argument more clearly.

    Your question around ‘big data’ repositories is unrelated to the point I was trying to make. ‘Big data’ is not a storage problem; it’s an analysis and interpretation problem. I agree that repositories of information (e.g. libraries) are not new. However, while no-one would suggest that you need to read and understand all of the information in a library in near real time, there is ample evidence that 21st century clinicians will need to interpret patient-related information flowing from a variety of sources, and they will need to do this more quickly than is possible with human cognition alone. Either that or they will be forced to ignore it. I made the point here: “The exponential growth of these interconnected medical devices continuously generate a volume of data that simply cannot be analysed, interpreted, or even understood by human beings alone.” The problem is not how much information we can hold in our heads, in our libraries, or on our devices. The problem is that, for the first time, the amount of data being generated on and by the patient exceeds our ability to understand it in the clinical time frame. For that, we will need AI-based assistance.

    Thank you again for your comments on the paper. I hope that my response has addressed your concerns, but if any remain, please do let me know.
