If, as many assume, AI is to take over many organizational roles from humans, it would have to develop considerably beyond its current capabilities. MIT has defined ‘human-centered AI’ as “the design, development, and deployment of systems that learn from and collaborate with humans in a deep, meaningful way”. For AI to become less ‘human-centered’, then, we would need a landscape in which smart algorithms do all the heavy lifting. We asked experts working in the field for their thoughts on the role of humans in machine learning today and in its future.
Remo, Senior Software Engineer at Apple, states: “ML as a part of software engineering is by definition human-centric. It is human-centric because it substitutes the work of humans: for example, in music recommendation, spam identification, or object recognition. ML is human-centric because it is often (or almost always) trained on data that humans generated or at least labeled. And it is human-centric because ML models are built and designed on top of people’s inductive biases about the world. At the same time, it is not human (or human-centric) because it does not reason or think. It spots patterns but does not ‘connect the dots’.”
Indu Khatri, Machine Learning Lead at financial giant HSBC, echoed this point: “I would break ML problems down into three stages. The first is identifying the problem to which ML needs to be applied and defining a broad architecture for how ML models will solve it. The second is developing the ML models, and the third is taking actions based on the models, getting feedback from your environment, and improving the models further. Of these three, I believe Stage one will always need some kind of human intervention. In Stage two, with the advent of AutoML, the amount of human intervention is decreasing every day. Finally, to decrease human intervention in Stage three, we would need to improve the sample efficiency of reinforcement learning so that our models can map predictions to actions in a feasible way.”
Conversely, Piero Molino, Staff ML/NLP Research Scientist at Stanford and former Senior Research Scientist at Uber AI, suggested that moving away from a human-centric model would be a mistake. “I believe there are several friction points that can be automated, like the data/model interface, model search, evaluation and monitoring, and Ops in general. But what I would rather wish is for ML to become more human-centric, in the sense that it should put humans at the center of its strategy rather than increased efficiency. That is achieved with more human-in-the-loop evaluation and data-generation processes, more robustness, and more fairness evaluations.”
Lavi Nigam, Data Scientist at Gartner, thinks we’re already fairly close to an AutoML model, which, as the name suggests, would mean far less human interaction is needed. So much so that it could come to fruition within a few years. “More and more intrinsic pieces of data science workflows are being automated mathematically and programmatically. We already see AutoML models that can figure out the best model for your data within any defined constraints. As AutoML advances, human interference in model building will greatly reduce, resulting in more optimized models. Deployment automation and model tracking (both part of MLOps) are other areas where human involvement will drastically reduce in the coming years.”
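To make the model-search idea concrete, here is a minimal, library-free sketch of what AutoML-style systems do at their core: fit several candidate models and automatically pick the one with the lowest validation error. The candidate models and toy data below are invented for illustration and stand in for a real AutoML search space.

```python
import random

# Toy data: y = 3x + noise (invented for illustration).
random.seed(0)
xs = [i / 10 for i in range(100)]
ys = [3 * x + random.gauss(0, 0.1) for x in xs]

# Simple train/validation split.
train_x, val_x = xs[:80], xs[80:]
train_y, val_y = ys[:80], ys[80:]

def mean_model(train_x, train_y):
    """Baseline candidate: always predict the training mean."""
    mean = sum(train_y) / len(train_y)
    return lambda x: mean

def linear_model(train_x, train_y):
    """Candidate: closed-form least-squares fit of y = a*x + b."""
    n = len(train_x)
    mx = sum(train_x) / n
    my = sum(train_y) / n
    a = sum((x - mx) * (y - my) for x, y in zip(train_x, train_y)) / \
        sum((x - mx) ** 2 for x in train_x)
    b = my - a * mx
    return lambda x: a * x + b

def mse(model, xs, ys):
    """Mean squared error of a fitted model on held-out data."""
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# The "automated" part: score every candidate, keep the best one.
candidates = {"mean": mean_model, "linear": linear_model}
scores = {name: mse(fit(train_x, train_y), val_x, val_y)
          for name, fit in candidates.items()}
best = min(scores, key=scores.get)
print(best)  # the linear candidate wins on this linear data
```

Real AutoML systems search far larger spaces (model families, hyperparameters, feature pipelines), but the selection loop is the same shape: fit, score on held-out data, keep the winner.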
Shuo Zhang, Senior Machine Learning Engineer at Bose Corporation, agreed that we’re fairly close to an ML model that requires little human interaction. “There are many current techniques that make ML algorithms less dependent on human supervision by leveraging the intrinsic structure of large amounts of data, such as self-supervised and unsupervised techniques.”
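As a toy illustration of the self-supervision idea Zhang describes, the sketch below derives its own training labels from raw, unlabeled text: each word serves as the “label” for the word that precedes it, so no human annotation is needed. The corpus and next-word predictor are invented for illustration.

```python
from collections import Counter, defaultdict

# Unlabeled text: the supervision signal (the "next word") comes
# from the data itself, not from human annotators.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Build (context, next-word) training pairs directly from the raw sequence.
pairs = list(zip(corpus, corpus[1:]))

# "Train" a trivial next-word predictor: count which word
# most frequently follows each context word.
follower_counts = defaultdict(Counter)
for context, nxt in pairs:
    follower_counts[context][nxt] += 1

def predict_next(word):
    """Predict the most frequent follower of `word` in the corpus."""
    return follower_counts[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often here
```

Modern self-supervised models (e.g. language models trained on next-token prediction) scale this same trick up: the structure of the data itself supplies the labels.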
Two other experts we asked agreed that ML as a concept requires human intervention; removing it wouldn’t be beneficial and could actually be harmful. Jason Gauci, a Software Engineering Manager at Facebook, stated that “ML should work hand-in-hand with people, not replace them or automate the things that they do without oversight.” This was somewhat echoed by Sean Xie, Director of AI at Pfizer: “Current technologies are still focused on solving narrowly defined and specific problems. There’s a long way to go to be less human-centric.”
If we were to move toward an AutoML/less human-centric model, how could it be done? Yaman Kumar, a PhD student in Computer Science at the University of Buffalo, suggested that you would need to “join forces with the philosophy, metaphysics and ethics departments and see the field adopting a human-centric vision right, left and centre. As long as the AI and philosophy departments are cut off and work in their own silos, things will go on as-is. Recent times have shown green shoots, with more and more people from philosophy backgrounds entering the field and guiding key areas such as fairness in ML.”