On Tuesday, May 30, at 10:30 a.m. PST, Data Scientists Pramit Choudhary and Aaron Kramer will host a live webinar, "Striking a Balance Between Model Performance and Interpretability." The webinar follows the release of Skater, a Python package for model interpretation created by Choudhary and Kramer, and will cover how the package helps data science practitioners understand the inner workings of predictive models, and why that understanding matters.
The webinar will begin with an exploration of common roadblocks to building an effective predictive model, including the trade-off data scientists often make between accuracy and interpretability. Data scientists frequently opt for simple models that are easier to interpret, even though neural networks or other deep learning algorithms may produce more accurate predictions.
This is where model interpretation comes in: a set of techniques that unpack how complex predictive models arrive at the outputs that drive decision-making in data-savvy organizations. Next Tuesday's webinar will dive into the two major types of model interpretation: global and local. Global interpretation relies on partial dependence plots, which visualize the modeled relationship between a predictor and the target. Local interpretation, meanwhile, is best exemplified by the open source LIME (Local Interpretable Model-Agnostic Explanations) framework, which explains individual predictions.
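The two flavors can be sketched in a few lines of NumPy. The snippet below is an illustrative sketch, not code from the webinar or from Skater: `black_box` is a hypothetical stand-in model, and the perturbation scale and kernel width are arbitrary choices. It computes a one-dimensional partial dependence curve (the global technique) and then fits a LIME-style weighted linear surrogate around a single point (the local technique).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box model: a nonlinear function of two features.
def black_box(X):
    return np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

X = rng.uniform(-2, 2, size=(200, 2))

# --- Global: partial dependence of the prediction on feature 0 ---
# For each grid value v, fix feature 0 at v across the whole dataset
# and average the model's predictions over the other features.
grid = np.linspace(-2, 2, 25)
pd_curve = []
for v in grid:
    X_mod = X.copy()
    X_mod[:, 0] = v
    pd_curve.append(black_box(X_mod).mean())
pd_curve = np.array(pd_curve)

# --- Local: LIME-style explanation at a single instance x0 ---
x0 = np.array([0.5, 1.0])
Z = x0 + rng.normal(scale=0.3, size=(500, 2))      # perturb around x0
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.25)  # proximity weights
A = np.column_stack([np.ones(len(Z)), Z])          # intercept + features
# Weighted least squares: an interpretable linear surrogate that is
# faithful to the black box only in the neighborhood of x0.
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(A * sw[:, None], black_box(Z) * sw, rcond=None)
# coef[1] and coef[2] approximate the model's local sensitivities at x0.
```

The design choice mirrors what the webinar describes: the partial dependence curve summarizes the model's average behavior across the whole dataset, while the surrogate's coefficients are meaningful only near `x0`. Skater and LIME wrap the same ideas in production-ready APIs.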
Finally, Kramer and Choudhary will share a demo of Skater, our own open source, model-agnostic framework for interpreting models. The demo will include examples of how Skater's APIs can be used in both classification and regression use cases. Attendees will also learn how Skater consistently explains both deployed and in-memory models, and how it builds trust and transparency into the modeling process.