When an AI model produces an output, it is reasonable to want to know why that particular output was given. Explaining the workings of the model means tracing the output back to its origins: the internals of the model and the data present in the input. This is the focus of explainable AI (XAI). Explanation is much easier when the model has been built with explainability in mind, although traditional black-box models are not completely hopeless either.
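To give a concrete taste of what "explaining" can look like for a black-box model, below is a minimal sketch of one model-agnostic technique, permutation feature importance, using scikit-learn. The dataset and model are arbitrary illustrative choices, not part of the module material: the idea is simply to shuffle each input feature in turn and see how much the model's accuracy suffers.

```python
# A minimal sketch of one model-agnostic XAI technique: permutation
# feature importance. The model is treated as a black box; a feature
# whose shuffling hurts accuracy the most contributed the most to the
# model's predictions. Dataset and model here are illustrative only.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature n_repeats times and record the mean accuracy drop.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

for name, importance in zip(data.feature_names, result.importances_mean):
    print(f"{name}: {importance:.3f}")
```

Note that this explains the model's behavior only in aggregate; explaining why a single, specific output was given usually calls for local techniques instead.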
This module will help you do the following:
E-books on explainable AI (pick at least one and browse it for a bit):
After having browsed the e-book(s), make a list of at least half a dozen possible real-world applications of AI (whether contemporary or speculative future uses). For each application, rate how crucial it is for the AI to be explainable, and note what exactly would need to be explained in each setting. Again, please treat this as a thought exercise for yourself rather than something to look up online or prompt an LLM about.
After this module, you should be familiar with the following concepts:
Remember that you can always look concepts up in the glossary. Should anything be missing or insufficient, please report it.