After successfully segmenting the pavement defects into individual images, you sent your colleagues on the asphalt repair team to visit the sites and label a training set. Robin then trained a deep-learning model to assign a severity rating to each image. You are very proud of this, but you were also expecting complaints like the one in the text message you received, since you already knew that some residents of the town tend to think their problem is more important than anyone else's.
You'd like to add a button to the pothole ranking list so that residents can see the photo of each defect along with an explanation of why it was assigned a specific severity. The first part (the photos of the defects) is easy, and Robin can set it up by the end of the day. The second part (the explanation of the severity) is an obstacle: you have no idea why the deep-learning model produces the outputs it does, so there seems to be no way for you to make the process more transparent.
Explore and discuss alternatives for creating a severity rating that can be justified in terms that the residents' jogging club will (hopefully) find informative. Combine writing, diagrams, and pseudocode as needed in your response.
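One alternative worth considering is to replace the black-box severity score with a transparent, rule-based score computed from measurable defect features, so that the explanation shown to residents is simply the calculation itself. The sketch below illustrates the idea; the feature names, weights, and caps are illustrative assumptions, not values from the project.

```python
# A hypothetical, fully transparent severity score: every feature,
# weight, and cap is visible and can be quoted in the explanation.
# The features and numbers below are illustrative assumptions.

def severity_score(area_m2: float, depth_cm: float, daily_traffic: int) -> dict:
    """Score a pavement defect on a 0-10 scale and explain each term."""
    # Each term is capped so no single feature dominates the total.
    area_term = min(area_m2 * 2.0, 4.0)            # larger defects matter more
    depth_term = min(depth_cm * 0.5, 3.0)          # deep potholes damage vehicles
    traffic_term = min(daily_traffic / 1000, 3.0)  # busy roads affect more people

    score = area_term + depth_term + traffic_term
    return {
        "severity": round(score, 1),
        "explanation": (
            f"area contributed {area_term:.1f}, "
            f"depth contributed {depth_term:.1f}, "
            f"traffic contributed {traffic_term:.1f} points"
        ),
    }

print(severity_score(area_m2=0.8, depth_cm=6.0, daily_traffic=2500))
```

Because the score is a simple sum, the same numbers that produce the ranking can be displayed verbatim next to each photo, which is exactly the kind of justification the jogging club is asking for.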
Browse through this Python XAI tutorial and discuss how applicable the suggested approaches could be for creating an explainable pavement defect classification system like the one we need.
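Whatever the tutorial covers, many model-agnostic image explainers share one idea: perturb parts of the input and observe how the prediction changes. As a point of comparison while you read, here is a minimal occlusion-sensitivity sketch; it assumes a `predict` function that maps a batch of H×W×C images to one severity probability per image, which is an assumption you would adapt to the actual model's interface.

```python
import numpy as np

def occlusion_map(image, predict, patch=16, baseline=0.0):
    """Model-agnostic occlusion sensitivity.

    Slides a blank patch over the image and records how much the
    predicted severity drops; large drops mark regions the model
    relies on. `predict` is an assumed interface: a function taking
    a batch of HxWxC images and returning one probability per image.
    """
    h, w = image.shape[:2]
    base = predict(image[np.newaxis])[0]  # unoccluded prediction
    heat = np.zeros((h // patch, w // patch))
    for i in range(heat.shape[0]):
        for j in range(heat.shape[1]):
            occluded = image.copy()
            occluded[i * patch:(i + 1) * patch,
                     j * patch:(j + 1) * patch] = baseline
            # Positive values: occluding this region lowered the score,
            # so the model was using it as evidence of severity.
            heat[i, j] = base - predict(occluded[np.newaxis])[0]
    return heat  # overlay on the photo to show where the model "looked"
```

A heat map like this answers the residents' "why this score?" question visually, even though it does not make the underlying model itself interpretable; weigh that trade-off when you assess the tutorial's approaches.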
Watch this 20-minute video tutorial by Gade et al. (at double speed if you're an impatient listener), and then summarize the key points in your own words. Attach a list of unfamiliar terms, if any.
Browse through the scientific article Obtaining genetics insights from deep learning via explainable artificial intelligence (Novakovsky et al., 2023). Then discuss in writing how vital you find XAI to be in medical applications. Also discuss any other applied fields in which you think explainability is not just nice-to-have but actually critical for the deployment of any AI application.