Historical Overview
The long and now rapidly flowing river of Artificial Intelligence (AI) that courses through the global technoscape has several milestones worth noting, especially as they shape the current speed and course of that river. It is on both the speed and the direction that this paper intends to focus. As most readers know, AI originated in the 1950s with the work of Frank Rosenblatt [1] and his concept of the perceptron, a neural network intended to mimic the brain. This concept was extended by Geoffrey Hinton and colleagues in the 1980s [2] to the multi-layered neural network, which in turn made possible an early self-driving car built by Dean Pomerleau [3]. A decade later Yann LeCun [4] applied such networks to the recognition of handwritten digits. From there the river slowed somewhat until 2006 [5], when faster chips and massive datasets unleashed the power of Hinton's algorithms, and AI began to identify images, recognize speech and support language translation with far greater ease. Some six decades after the field's origins, in 2012, machine learning and neural networks became front-page news when Hinton and his students [6] cut the image classification error rate on ImageNet, a dataset of more than 10 million labelled images, from roughly 26% to 15%. This has led to substantive adoption of AI in industry and finance, not to mention medical education.
Schools of Thought
Current discussion surrounding the nature and function of AI is replete with fierce advocates on both sides of what many regard as a double-edged sword. One side sees AI as having the potential to enhance human capacity and transform the way we live, work and experience life on the planet. This camp includes AI researchers and commentators such as Geoffrey Hinton, Yoshua Bengio, Yann LeCun, Andrew Ng, Chris Bishop, Max Tegmark and Ray Kurzweil [7]. The other side consists of those who warn of the imminent dangers of AI with respect to loss of individual freedom and control, as well as the destruction of many jobs and livelihoods within the global economy. Included in this group are the likes of Elon Musk, Stuart Russell, Martin Ford [8], Nick Bostrom, Sam Harris, Peter Haas and Yuval Noah Harari [9]. The core concern of the latter group is that we are building AI systems about which we make incorrect assumptions regarding their capabilities and intelligence; in narrow fields such as computation, reading and pattern recognition, AI is already far ahead of humans. Without proper ethical guidelines, principles and processes, we may in fact be building something that could lead to our extermination, or erode the many practical advantages of democracy, precisely because, ironically, these systems do what we ask them to do, to the detriment of humanity. As Norbert Wiener warned in 1960, if we put a purpose into a machine, we had better be certain that the purpose is the one we really desire [10].
Current Capabilities and Behaviours of AI
While we accept that Artificial General Intelligence (AGI) may remain a distinct and distant possibility, we argue that it is important to keep in view the current behaviours and capabilities of Narrow AI, while working towards both understanding and controlling their evolution and application, especially within the Health Services [11-13] and Medical Education sectors. Building on the general approaches and successes of machine learning, i.e., supervised, unsupervised and reinforcement learning, it is increasingly accepted that the applications gaining the most traction and funding for research, development and deployment lie in four specific areas: (i) recommending what one should buy online, (ii) spotting spam and detecting credit card fraud, (iii) recognizing who and what is in a photo, and (iv) interacting with virtual assistants such as Alexa and Siri. Increasingly, these AI behaviours are spilling over into medicine. For example, recognition of what is in images and video is leading to speedier identification of pathologies and is supporting assistive prediction. Another example is its application in biometrics, enabling real-time tracking for the diagnosis and management of chronic diseases. Perhaps more interesting is that AI is being used in areas where doctor shortages exist and patient needs are growing. Yet another application is elder-care robots that can detect and interpret signals from the brains of the elderly to support both patient and caregiver.
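To ground the supervised learning paradigm referred to above, the following is a minimal, purely illustrative sketch of a Rosenblatt-style perceptron trained on synthetic data; the "image-derived" features and labels are invented for this example and are not drawn from any clinical dataset or system described in this paper.

```python
# Minimal illustration of supervised learning: a Rosenblatt-style perceptron
# trained on synthetic, illustrative data. Feature values and labels are
# invented for this sketch and do not come from any real clinical dataset.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "image-derived" features: two measurements per case,
# labelled 1 (finding present) or 0 (finding absent).
X = np.vstack([
    rng.normal(loc=2.0, scale=0.5, size=(50, 2)),   # cases with the finding
    rng.normal(loc=0.0, scale=0.5, size=(50, 2)),   # cases without it
])
y = np.array([1] * 50 + [0] * 50)

# Perceptron learning rule: nudge the weights whenever a prediction is wrong.
w = np.zeros(X.shape[1])
b = 0.0
learning_rate = 0.1

for epoch in range(20):
    for xi, target in zip(X, y):
        prediction = 1 if (xi @ w + b) > 0 else 0
        error = target - prediction
        w += learning_rate * error * xi
        b += learning_rate * error

predictions = (X @ w + b > 0).astype(int)
print(f"training accuracy: {(predictions == y).mean():.2f}")
```

Real diagnostic systems rely on deep, multi-layered networks trained on very large labelled image sets, but the underlying loop of predicting, comparing against a label and adjusting weights is the same.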
There is compelling evidence to suggest that recent advances in deep learning, including unsupervised systems, could have a significant impact on how our social, political and economic narratives evolve. Increasingly, questions are being raised with respect to the impact upon, and control over, systems and services formerly managed by humans. The arguments build around the prospect of rapid advances towards Artificial General Intelligence (AGI) and the attendant requirements for the training and monitoring of AI. These significant evolutionary improvements in machine and deep learning techniques, together with the inability of developers to explain how and why AI generates its own operational biases, suggest that the adoption of AI should reflect a clearer understanding of the principles and processes guiding such research directions.
Medical Education
Medical Education is education relevant to human health for any type of learner, including health professionals, students in the health professions, and patients.
With respect to Medical Education and our own research in this area, Narrow Artificial Intelligence (NAI) enables a number of innovative and paradigm-shifting experiences. For the first time we can explore the extent to which an AI tutor can support learning and teaching. We are now looking at programming AI to support personalized and adaptive learning for each and every learner. Specifically, within a fully digitized curriculum we are examining the extent to which AI can dynamically generate learner profiles based on extensive knowledge of a student's prior learning history, current activity within accessed learning resources, and knowledge gained through quizzes and assessment results, including Workplace-Based Assessments. With this information we can then explore the degree to which AI can interact with learners to provide targeted content, meaningful feedback and dynamic visualization of curriculum progress and the associated mastery of the specific and general competencies required of a medical practitioner. Of particular interest to us is the degree to which the current affordances of AI can be extended to apply such tracked performance to digital learning experiences such as Virtual Patients [14].
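As a purely hypothetical sketch of the kind of learner profile and targeted-content logic described above, the snippet below shows one possible data structure and a simple rule for surfacing unseen resources in under-mastered competencies; all field names, thresholds, resource titles and the recommend_content helper are assumptions made for illustration and do not describe our implemented curriculum system.

```python
# Hypothetical sketch of a dynamically generated learner profile and a simple
# rule for targeting content. Field names, thresholds and the helper function
# are illustrative assumptions, not part of any deployed curriculum system.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class LearnerProfile:
    learner_id: str
    # Mastery per competency, aggregated from quizzes, assessments and
    # Workplace-Based Assessments, scaled to the range 0.0-1.0.
    competency_mastery: Dict[str, float] = field(default_factory=dict)
    # Learning resources the student has already accessed.
    accessed_resources: List[str] = field(default_factory=list)


def recommend_content(profile: LearnerProfile,
                      resources_by_competency: Dict[str, List[str]],
                      mastery_threshold: float = 0.7) -> List[str]:
    """Suggest unseen resources for competencies below the mastery threshold."""
    suggestions = []
    for competency, mastery in profile.competency_mastery.items():
        if mastery < mastery_threshold:
            for resource in resources_by_competency.get(competency, []):
                if resource not in profile.accessed_resources:
                    suggestions.append(resource)
    return suggestions


# Example usage with invented values.
profile = LearnerProfile(
    learner_id="student-001",
    competency_mastery={"cardiology": 0.55, "pharmacology": 0.85},
    accessed_resources=["ECG basics module"],
)
catalogue = {
    "cardiology": ["ECG basics module", "Virtual patient: chest pain"],
    "pharmacology": ["Drug interactions quiz"],
}
print(recommend_content(profile, catalogue))
# -> ['Virtual patient: chest pain']
```

In practice such a profile would be updated continuously from quiz, assessment and Workplace-Based Assessment data, and the recommendation step would be driven by a trained model rather than a fixed threshold.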
Conclusion
This paper has highlighted key directions that research on, and implementation of, AI have taken to date in health services and medical education. It has also underlined the degree to which current supervised and unsupervised learning by AI agents may impact medical systems, services and training, and argued strongly that moral and ethical responsibilities consonant with generally accepted medical values and principles need to be evident before AI gains traction in medical decision making, an area that directly impacts the quality of care and patient safety.
Paul Gagnon & Nabil Zary
References
Ford, M., Rise of the Robots: Technology and the Threat of a Jobless Future. 2016, New York, NY, USA: Basic Books Inc.
Harari, Y.N., Homo Deus: A Brief History of Tomorrow. 2016, London: Harvill Secker.
Wiener, N., Some Moral and Technical Consequences of Automation. Science, 1960. 131(3410): p. 1355-1358.
Hinton, G.E., S. Osindero, and Y.-W. Teh, A Fast Learning Algorithm for Deep Belief Nets. Neural Computation, 2006. 18(7): p. 1527-1554.
Jiang, F., Y. Jiang, H. Zhi, et al., Artificial intelligence in healthcare: past, present and future. Stroke and Vascular Neurology, 2017.
Krizhevsky, A., I. Sutskever, and G.E. Hinton, ImageNet classification with deep convolutional neural networks, in Advances in Neural Information Processing Systems. 2012.
Kurzweil, R., The Singularity is near: When Humans Transcend Biology. 2005, New York: Viking.
LeCun, Y., et al., Backpropagation applied to handwritten zip code recognition. Neural Computation, 1989. 1(4): p. 541-551.
Pomerleau, D.A., ALVINN: an autonomous land vehicle in a neural network, in Advances in Neural Information Processing Systems 1, D.S. Touretzky, Editor. 1989, Morgan Kaufmann Publishers Inc. p. 305-313.
Rosenblatt, F., The perceptron: a probabilistic model for information storage and organization in the brain. Psychological Review, 1958. 65(6): p. 386.
Rumelhart, D.E., G.E. Hinton, and R.J. Williams, Learning representations by back-propagating errors. Nature, 1986. 323(6088): p. 533-536.
Wahl, B., A. Cossy-Gantner, S. Germann, et al., Artificial intelligence (AI) and global health: how can AI contribute to health in resource-poor settings? BMJ Global Health, 2018. 3: p. e000798.