Start Interpreting Complex Models In Python With Skater
Skater is a Python library designed to demystify the inner workings of complex or black-box models. Skater uses a number of techniques, including partial dependence plots and local interpretable model-agnostic explanations (LIME), to clarify the relationships between the data a model receives and the outputs it produces. With Skater, you can interpret models both before and after they are deployed to production.
Model interpretation is the process of understanding how a model makes a prediction based on the data it receives. With this information, you can establish trust in a model’s outputs, identify hidden feature interactions, and compare how different models make decisions. This process is especially useful in applications such as credit risk modeling, where a data scientist might have to explain why a model denied a customer a credit card.
Practitioners often select simple modeling techniques, such as linear regression or decision trees, over more accurate neural networks and ensembles because the simpler models are easier to interpret. Skater aims to eliminate this compromise by providing a common interpretation framework for all models, regardless of their complexity or the algorithm, language, or library used to build them.