Alexander Jung
Assistant Professor, Aalto University, Finland
Associate Editor, IEEE Signal Processing Letters
Explainable Empirical Risk Minimization
The successful application of machine learning (ML) methods is becoming increasingly dependent on their interpretability or explainability. Designing explainable ML systems is instrumental to ensuring the transparency of automated decision-making that targets humans. The explainability of ML methods is also an essential ingredient of trustworthy artificial intelligence. A key challenge in ensuring explainability is its dependence on the specific human user (“explainee”).
The users of machine learning methods might have vastly different background knowledge about machine learning principles. One user might have a university degree in machine learning or a related field, while another might never have received formal training in high-school mathematics. We measure explainability via the conditional entropy of the predictions given some user signal, which might be obtained from user surveys or biophysical measurements.
We propose the explainable empirical risk minimization (EERM) principle, which learns a hypothesis that optimally balances subjective explainability and risk.
The EERM principle is flexible and can be combined with arbitrary machine learning models. We present several practical implementations of EERM for linear models and decision trees. Numerical experiments demonstrate the application of EERM to detecting the use of inappropriate language on social media.
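To make the trade-off concrete, the sketch below implements a toy, EERM-flavored objective for a linear model in plain NumPy: the empirical risk (mean squared error) plus a weight lam times a conditional-variance surrogate for the conditional entropy of the predictions given a scalar user signal u (under a Gaussian surrogate, low conditional variance corresponds to low conditional entropy). The toy data, the variable names, and the variance surrogate are illustrative assumptions for this sketch, not the formulation presented in the talk.

```python
# Hypothetical EERM-style sketch for a linear model (not the speaker's implementation).
import numpy as np

rng = np.random.default_rng(0)

# Toy data: features X, labels y, and a scalar "user signal" u per data point.
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)
u = X[:, 0] + 0.05 * rng.normal(size=n)   # user signal correlated with one feature

lam = 1.0     # trade-off between empirical risk and the explainability surrogate
w = np.zeros(d)
step = 0.01
U = np.column_stack([np.ones(n), u])      # design matrix for regressing predictions on u

for _ in range(2000):
    pred = X @ w

    # Empirical risk term: gradient of the mean squared error.
    risk_grad = 2 * X.T @ (pred - y) / n

    # Explainability surrogate: residual variance of predictions after linear
    # regression on the user signal u; small residual variance means the
    # predictions are (approximately) explainable from u alone.
    coef, *_ = np.linalg.lstsq(U, pred, rcond=None)
    resid = pred - U @ coef
    expl_grad = 2 * X.T @ resid / n       # gradient of mean(resid**2) w.r.t. w

    w -= step * (risk_grad + lam * expl_grad)

print("learned weights:", np.round(w, 2))
```

Increasing lam pushes the learned predictions toward functions of the user signal alone, making them easier to explain to that particular user, at the cost of a higher empirical risk; lam = 0 recovers ordinary empirical risk minimization.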
Alexander Jung received the Ph.D. degree (sub auspiciis) in 2012 from Technical University Vienna (TU Vienna). After postdoctoral positions at TU Vienna and ETH Zurich, he joined Aalto University as an Assistant Professor of Machine Learning in 2015. He leads the group “Machine Learning for Big Data”, which studies explainable machine learning for network-structured data. Prof. Jung first-authored a paper that won a Best Student Paper Award at IEEE ICASSP 2011. He received an AWS Machine Learning Research Award and was named “Computer Science Teacher of the Year” at Aalto University in 2018. He currently serves as an Associate Editor for IEEE Signal Processing Letters and as chair of the IEEE Finland Joint Chapter on Signal Processing and Circuits and Systems. He authored the textbook Machine Learning: The Basics (Springer, 2022).
R.G. Goebel
University of Alberta, Canada
XAI-Lab in Edmonton, Alberta, Canada
Explanation as an essential component of machine-mediated acquisition of knowledge for predictive models
Explanation is not a recent invention precipitated by black-box predictive models, but rather a revival of the role of scientific explanation as a remedy to create trust and transparency in applications of machine learning. We note two strong trends in the grand challenge of the knowledge acquisition bottleneck, and propose that explanatory knowledge must be acquired concurrently during supervised learning. The resource costs of doing so must be balanced in a tradeoff between explainability and knowledge acquisition resources, for example in federated learning systems.
R.G. (Randy) Goebel is Professor of Computing Science at the University of Alberta and head of the XAI-Lab in Edmonton, Alberta, Canada, and concurrently holds the positions of Associate Vice President Research and Associate Vice President Academic. He is also co-founder and principal investigator of the Alberta Innovates Centre for Machine Learning. He holds B.Sc., M.Sc., and Ph.D. degrees in computer science from the Universities of Regina, Alberta, and British Columbia, and has held faculty appointments at the University of Waterloo, the University of Tokyo, Multimedia University (Malaysia), and Hokkaido University. He has worked at a variety of research institutes around the world, including DFKI (Germany), NICTA (Australia), and NII (Tokyo), and was most recently Chief Scientist at Alberta Innovates Technology Futures. His research interests include applications of machine learning to systems biology, visualization, and web mining, as well as work on natural language processing, web semantics, and belief revision. He has experience working on industrial research projects in scheduling, optimization, and natural language technology applications.
Matthew E. Taylor
Director, Intelligent Robot Learning Lab, Associate Professor & Graduate Admissions Chair, Computing Science
Fellow and Fellow-in-Residence, Alberta Machine Intelligence Institute
Canada CIFAR AI Chair, Amii
Reinforcement Learning in the Real World: Challenges and Opportunities for Human-Agent Interaction
While reinforcement learning (RL) has had many successes in video games and toy domains, recent success in high-impact problems shows that this mature technology can be useful in the real world. This talk will highlight some of these successes, with an emphasis on how RL is making an impact in commercial settings, as well as what problems remain before it can become plug-and-play like many supervised learning technologies. Further, we will argue that RL, like all current AI technology, is fundamentally a human-in-the-loop paradigm. This framing helps motivate why additional fundamental research on the interaction between humans and RL agents is critical to moving RL out of the lab and into the hands of non-academic practitioners.
Matt Taylor is an Associate Professor of Computing Science at the University of Alberta, where he directs the Intelligent Robot Learning Lab. He is also a Fellow and Fellow-in-Residence at Amii (the Alberta Machine Intelligence Institute). His current research interests include fundamental improvements to reinforcement learning, applying reinforcement learning to real-world problems, and human-AI interaction. The book “Reinforcement Learning Applications for Real-World Data”, which he co-authored with Osborne and Singh, is aimed at practitioners without degrees in machine learning and has an expected release date of Summer 2022.