Python Library for Model Interpretation/Explanations

Skater is a unified framework for model interpretation that helps you build the interpretable machine learning systems often needed for real-world use cases (we are actively working towards enabling faithful interpretability for all forms of models). It is an open source Python library designed to demystify the learned structures of a black box model both globally (inference on the basis of a complete data set) and locally (inference about an individual prediction). The project was started […]
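As a hedged sketch of what global inspection with Skater typically looks like, assuming a fitted scikit-learn model (the Interpretation and InMemoryModel names follow Skater's documented API, but the library is older and exact signatures may differ across versions):

# Sketch only: global feature importance with Skater on a fitted scikit-learn model.
# Assumes `pip install skater`; treat the API names as an illustration, not a reference.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

from skater.core.explanations import Interpretation
from skater.model import InMemoryModel

data = load_breast_cancer()
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(data.data, data.target)

# Global inference: aggregate behaviour over the whole data set
interpreter = Interpretation(data.data, feature_names=data.feature_names)
model = InMemoryModel(clf.predict_proba, examples=data.data[:100])
print(interpreter.feature_importance.feature_importance(model))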

Read more

Python Individual Conditional Expectation Plot Toolbox

A Python implementation of individual conditional expectation plots inspired by R’s ICEbox. Individual conditional expectation plots were introduced in Peeking Inside the Black Box: Visualizing Statistical Learning with Plots of Individual Conditional Expectation (arXiv:1309.6392). Quickstart: pycebox is available on PyPI and can be installed with pip install pycebox. The tutorial recreates the first example from the paper above using pycebox. Development: for easy development and prototyping using IPython notebooks, a Docker environment is included. […]
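A minimal sketch of how ICE curves are typically produced with pycebox, assuming a fitted scikit-learn regressor; the ice and ice_plot names come from pycebox's README, but their exact keyword arguments should be checked against the installed version:

# Sketch only: one ICE curve per row of X, varying feature "x1" over a grid.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestRegressor
from pycebox.ice import ice, ice_plot  # names per the pycebox README (assumption)

rng = np.random.RandomState(0)
X = pd.DataFrame({"x1": rng.uniform(-1, 1, 500), "x2": rng.uniform(-1, 1, 500)})
y = X["x1"] ** 2 + X["x2"] + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

ice_df = ice(data=X, column="x1", predict=model.predict)
ice_plot(ice_df, frac_to_plot=0.2)
plt.xlabel("x1")
plt.ylabel("predicted y")
plt.show()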

Read more

Python implementation of R package breakDown

Python implementation of the breakDown package (https://github.com/pbiecek/breakDown). Docs: https://pybreakdown.readthedocs.io.

Requirements: nothing fancy, just Python 3.5.2+ and pip.

Installation: install directly from GitHub:
git clone https://github.com/bondyra/pyBreakDown
cd ./pyBreakDown
python3 setup.py install  # (or use pip install . instead)

Basic usage:
Load dataset
from sklearn import datasets
x = datasets.load_boston()
feature_names = x.feature_names
Prepare model
model = tree.DecisionTreeRegressor()
Train model
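A runnable version of the quoted setup steps, as a sketch: the missing tree import and the training call are filled in here, and note that load_boston was removed from scikit-learn 1.2+, so an older scikit-learn or a substitute dataset is needed. The pyBreakDown explanation step itself follows in the full README and is not reproduced here.

from sklearn import datasets, tree

# Load dataset (load_boston is removed in scikit-learn >= 1.2;
# substitute e.g. datasets.fetch_california_housing() on newer versions)
x = datasets.load_boston()
feature_names = x.feature_names

# Prepare model
model = tree.DecisionTreeRegressor()

# Train model
model.fit(x.data, x.target)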

Read more

Learning to Explain: An Information-Theoretic Perspective on Model Interpretation

Code for replicating the experiments in the paper Learning to Explain: An Information-Theoretic Perspective on Model Interpretation (ICML 2018) by Jianbo Chen, Mitchell Stern, Martin J. Wainwright, and Michael I. Jordan. Dependencies: the code for L2X runs with Python and requires TensorFlow version 1.2.1 or higher and Keras version 2.0 or higher. Please pip install the following packages: numpy, tensorflow, keras, pandas, nltk. Or you may run the following in a shell to install the required packages: git […]

Read more

Lime: Explaining the predictions of any machine learning classifier

This project is about explaining what machine learning classifiers (or models) are doing. At the moment, we support explaining individual predictions for text classifiers or classifiers that act on tables (numpy arrays of numerical or categorical data) or images, with a package called lime (short for local interpretable model-agnostic explanations). Lime is based on the work presented in this paper (bibtex here for citation). Here is a link to the promo video: Our plan is to add more packages that […]
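A minimal tabular sketch of the lime workflow described above, assuming a fitted scikit-learn classifier; LimeTabularExplainer and explain_instance are lime's documented entry points, while the dataset and model here are arbitrary placeholders:

# Sketch only: local, model-agnostic explanation of a single prediction.
import lime.lime_tabular
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_iris()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

explainer = lime.lime_tabular.LimeTabularExplainer(
    X_train,
    feature_names=data.feature_names,
    class_names=data.target_names,
    discretize_continuous=True,
)
exp = explainer.explain_instance(X_test[0], clf.predict_proba, num_features=4)
print(exp.as_list())  # (feature, weight) pairs for the local surrogate model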

Read more

A Python package which helps to debug machine learning classifiers and explain their predictions

ELI5 is a Python package which helps to debug machine learning classifiers and explain their predictions. It provides support for the following machine learning frameworks and packages: scikit-learn. Currently ELI5 allows you to explain the weights and predictions of scikit-learn linear classifiers and regressors, print decision trees as text or as SVG, show feature importances, and explain predictions of decision trees and tree-based ensembles. ELI5 understands text processing utilities from scikit-learn and can highlight text data accordingly. Pipeline and FeatureUnion are supported. […]
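A small sketch of the scikit-learn text support described above, assuming pip install eli5; explain_weights, explain_prediction and format_as_text are ELI5's documented entry points, and the text-classification setup here is an arbitrary example:

# Sketch only: global weights and a local prediction explanation for a linear text classifier.
import eli5
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

categories = ["alt.atheism", "sci.space"]
train = fetch_20newsgroups(subset="train", categories=categories)

vec = TfidfVectorizer()
X = vec.fit_transform(train.data)
clf = LogisticRegression(max_iter=1000).fit(X, train.target)

# Global view: the most influential features of the linear classifier
print(eli5.format_as_text(eli5.explain_weights(clf, vec=vec, top=10)))

# Local view: how individual tokens contribute to one prediction
print(eli5.format_as_text(eli5.explain_prediction(clf, train.data[0], vec=vec)))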

Read more

A game theoretic approach to explain the output of any machine learning model

SHAP (SHapley Additive exPlanations) is a game theoretic approach to explain the output of any machine learning model. It connects optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions (see papers for details and citations). Install: SHAP can be installed from either PyPI or conda-forge: pip install shap or conda install -c conda-forge shap. Tree ensemble example (XGBoost/LightGBM/CatBoost/scikit-learn/pyspark models): while SHAP can explain the output of any machine learning model, we […]
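A minimal tree-ensemble sketch of the idea above, using a scikit-learn random forest instead of XGBoost/LightGBM to keep dependencies small; TreeExplainer and summary_plot are SHAP's documented entry points, though the shape of the returned values differs slightly between SHAP versions:

# Sketch only: per-feature Shapley values for a tree ensemble.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(data.data, data.target)

# Fast SHAP values for tree ensembles via TreeExplainer
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data)

# Global summary: distribution of Shapley values per feature
shap.summary_plot(shap_values, data.data, feature_names=data.feature_names)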

Read more

Scanning your Conda environment for security vulnerabilities

You don’t want to deploy an application that has security vulnerabilities. That means your own code, but also third-party dependencies: it doesn’t matter how secure your code is if it’s exposing a TLS socket with a version of OpenSSL that has a remote code execution vulnerability. For pip-based Python applications, you’d usually run vulnerability scanners on Python dependencies like Django, and on system packages like OpenSSL. With Conda, however, the situation is a little different: Conda combines both types of […]

Read more

Explainability Requires Interactivity In Python

This repository contains the code to train all custom models used in the paper Explainability Requires Interactivity, as well as to create all static explanations (heat maps and generative). For our interactive framework, see the sister repository. Precomputed generative explanations are located at static_generative_explanations. Requirements: install the conda environment via conda env create -f env.yml (depending on your system you might need to change some versions, e.g. for pytorch, cudatoolkit and pytorch-lightning). For some parts you will need the FairFace […]

Read more