Interpretability, or the ability to explain why and how a system makes a decision, can help us improve models, satisfy regulations, and build better products. Black-box techniques, such as deep learning, have delivered breakthrough capabilities at the cost of interpretability. In this report we show how to make models interpretable without sacrificing their capabilities or accuracy.
Refractor shows how interpretability opens up new product possibilities for machine learning applications. It predicts churn probabilities for telecom customers and shows which customer attributes contribute to those predictions.
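For a model with linear structure, per-feature contributions to a prediction can be read off directly as weight times feature value. The sketch below illustrates the idea with a toy logistic churn model; the feature names, weights, and scaling are hypothetical and are not taken from Refractor, whose actual model is not described in this preview.

```python
import math

# Hypothetical, hand-set weights for a toy churn model (illustrative only;
# these are NOT Refractor's actual features or coefficients).
WEIGHTS = {"monthly_charges": 0.9, "tenure_months": -1.2, "has_contract": -0.8}
BIAS = -0.1

def churn_probability(customer):
    """Logistic model: sigmoid of a weighted sum of (standardized) features."""
    score = BIAS + sum(WEIGHTS[f] * v for f, v in customer.items())
    return 1.0 / (1.0 + math.exp(-score))

def contributions(customer):
    """Per-feature contribution to the log-odds: weight * value.

    Positive values push the prediction toward churn,
    negative values push it away."""
    return {f: WEIGHTS[f] * v for f, v in customer.items()}

customer = {"monthly_charges": 1.4, "tenure_months": 0.2, "has_contract": 1.0}
print(churn_probability(customer))
print(contributions(customer))
```

For non-linear black-box models, model-agnostic attribution methods such as LIME fit a local linear surrogate around one prediction and surface contributions the same way.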
This is a report and prototype preview. For full access to all of our reports and prototypes, contact us about becoming a subscriber.