A model is built to predict. To convince a human that the model's predictions are legitimate, I would demand two kinds of explanation:
- Why is the prediction for Xa equal to Ya, and the prediction for Xb equal to Yb?
- Why is Ya larger than Yb?
There are currently quite a few good approaches, which fall into two camps: global and local.
Global: fit an inherently interpretable model (e.g. a linear model or a decision tree) that directly explains the black box's overall behaviour.
Local: fit a simple model around the points of interest, then use its coefficients to explain those individual predictions (a minimal sketch of both ideas follows the paper links below).
- LIME (a model-agnostic local explanation approach; two key papers are listed below)
- https://arxiv.org/abs/1606.05386
- https://arxiv.org/abs/1602.04938v3
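
To make both approaches concrete, here is a minimal sketch in Python. It assumes scikit-learn and NumPy, tabular data, and a black-box prediction function `predict_fn`; the function names (`global_surrogate`, `local_surrogate`) and parameters such as `kernel_width` are illustrative choices, not taken from the LIME papers.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.tree import DecisionTreeRegressor

def global_surrogate(predict_fn, X, max_depth=3):
    """Fit an interpretable model (a shallow decision tree) to mimic the
    black box over the whole dataset; inspecting the tree explains the
    model's overall behaviour."""
    tree = DecisionTreeRegressor(max_depth=max_depth)
    tree.fit(X, predict_fn(X))
    return tree

def local_surrogate(predict_fn, x, num_samples=5000, kernel_width=0.75, seed=0):
    """Fit a weighted linear model around one instance x; its coefficients
    indicate which features drove this particular prediction (the LIME idea)."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    # Sample the neighbourhood of x by adding Gaussian noise.
    samples = x + rng.normal(scale=1.0, size=(num_samples, d))
    preds = predict_fn(samples)
    # Weight each sample by proximity to x (RBF kernel) so the surrogate
    # is faithful locally rather than globally.
    distances = np.linalg.norm(samples - x, axis=1)
    weights = np.exp(-(distances ** 2) / (kernel_width ** 2))
    linear = Ridge(alpha=1.0)
    linear.fit(samples, preds, sample_weight=weights)
    return linear.coef_  # one importance score per feature
```

With something like this, inspecting `global_surrogate(model.predict, X)` addresses the overall-behaviour question, while comparing `local_surrogate(model.predict, Xa)` against `local_surrogate(model.predict, Xb)` is one way to argue why Ya is larger than Yb.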