From the course: Machine Learning and AI Foundations: Producing Explainable AI (XAI) and Interpretable Machine Learning Solutions
Comparing IML and XAI - KNIME Tutorial
- [Instructor] We've mentioned that there are two distinct worldviews and approaches to providing transparency into our models' predictions. Let's start by discussing the explainable AI approach in more detail. Black-box techniques, like deep learning and XGBoost, are increasingly popular. They dominate the list of winning entries in machine learning competitions, like those on Kaggle, but their very nature makes them difficult to explain or interpret. For example, in medical AI, there are models that can accurately indicate the presence of disease, but it is often unclear, even to a doctor, how the model was able to make that determination. Yet there are situations, including medical AI, where we need to be able to explain not only what the predictions are, but why a particular prediction was made. This is also true of the loans and reason codes that we discussed in the previous video. So the motivation for XAI is…
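Although the course demonstrates these ideas in KNIME, the contrast can also be sketched in a few lines of code. Below is a minimal Python illustration, not taken from the course: scikit-learn, the toy breast-cancer dataset, and permutation importance are my own assumed choices. The point it makes is the one from the transcript: an interpretable model's coefficients are the explanation, while a black-box model needs a separate post-hoc step to produce one.

# Minimal sketch contrasting IML and XAI (illustrative only; assumes scikit-learn).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# IML: an intrinsically interpretable model. Its standardized coefficients
# ARE the explanation; no extra machinery is needed.
scaler = StandardScaler().fit(X_train)
interpretable = LogisticRegression(max_iter=1000).fit(scaler.transform(X_train), y_train)
print("Interpretable model: largest coefficients")
top = sorted(zip(X.columns, interpretable.coef_[0]), key=lambda p: -abs(p[1]))[:5]
for name, coef in top:
    print(f"  {name}: {coef:+.2f}")

# XAI: a black-box model explained after the fact with a post-hoc method.
# Permutation importance is used here; SHAP or LIME are common alternatives.
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(black_box, X_test, y_test, n_repeats=10, random_state=0)
print("Black-box model: post-hoc permutation importances")
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"  {X.columns[idx]}: {result.importances_mean[idx]:.3f}")

Note the asymmetry this sketch exposes: the logistic regression explains itself globally, while the gradient-boosted model's explanation is computed separately and could in principle be attached to any black box.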
Contents
- Understanding the what and why your models predict (4m 28s)
- Variable importance and reason codes (2m 22s)
- Comparing IML and XAI (4m 23s)
- Trends in AI making the XAI problem more prominent (6m 18s)
- Local and global explanations (2m 23s)
- XAI for debugging models (2m 26s)
- KNIME support of global and local explanations (2m 22s)