Learning to Explain: An Information-Theoretic Perspective on Model Interpretation

Code for replicating the experiments in the paper Learning to Explain: An Information-Theoretic Perspective on Model Interpretation at ICML 2018, by Jianbo Chen, Mitchell Stern, Martin J. Wainwright, Michael I. Jordan. Dependencies The code for L2X runs with Python and requires TensorFlow 1.2.1 or higher and Keras 2.0 or higher. Please pip install the following packages: numpy tensorflow keras pandas nltk. Or you may run the following in a shell to install the required packages: git […]

Read more

Lime: Explaining the predictions of any machine learning classifier

This project is about explaining what machine learning classifiers (or models) are doing. At the moment, we support explaining individual predictions for text classifiers, for classifiers that act on tables (numpy arrays of numerical or categorical data), or for images, with a package called lime (short for local interpretable model-agnostic explanations). Lime is based on the work presented in this paper (bibtex here for citation). A promo video is also available. Our plan is to add more packages that […]
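
A minimal sketch of explaining one tabular prediction with lime; the classifier and dataset below are stand-ins chosen for illustration, not part of the project's own docs:

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    # Stand-in setup: any fitted classifier exposing predict_proba works.
    data = load_iris()
    clf = RandomForestClassifier().fit(data.data, data.target)

    explainer = LimeTabularExplainer(
        data.data,
        feature_names=data.feature_names,
        class_names=data.target_names,
        mode="classification",
    )
    # Fit a local, interpretable surrogate model around one instance.
    exp = explainer.explain_instance(data.data[0], clf.predict_proba, num_features=4)
    print(exp.as_list())  # (feature, weight) pairs of the local explanation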

Read more

A Python package which helps to debug machine learning classifiers and explain their predictions

ELI5 is a Python package which helps to debug machine learning classifiers and explain their predictions. It provides support for the following machine learning frameworks and packages: scikit-learn. Currently ELI5 allows you to explain weights and predictions of scikit-learn linear classifiers and regressors, print decision trees as text or as SVG, show feature importances, and explain predictions of decision trees and tree-based ensembles. ELI5 understands text processing utilities from scikit-learn and can highlight text data accordingly. Pipeline and FeatureUnion are supported. […]
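
As a minimal sketch of the workflow described above; the toy corpus and pipeline are stand-ins chosen for illustration:

    import eli5
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Stand-in text-classification pipeline (Pipeline support is built in).
    texts = ["the rocket launched into orbit", "the patient received a vaccine",
             "astronauts aboard the station", "the doctor prescribed antibiotics"]
    labels = [0, 1, 0, 1]  # 0 = space, 1 = medicine

    pipe = make_pipeline(TfidfVectorizer(), LogisticRegression())
    pipe.fit(texts, labels)

    # Global view: per-feature weights of the linear classifier.
    print(eli5.format_as_text(eli5.explain_weights(pipe)))

    # Local view: which words pushed one document toward its prediction.
    print(eli5.format_as_text(eli5.explain_prediction(pipe, texts[0])))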

Read more

A game theoretic approach to explain the output of any machine learning model

SHAP (SHapley Additive exPlanations) is a game theoretic approach to explain the output of any machine learning model. It connects optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions (see papers for details and citations). Install SHAP can be installed from either PyPI or conda-forge: pip install shap or conda install -c conda-forge shap Tree ensemble example (XGBoost/LightGBM/CatBoost/scikit-learn/pyspark models) While SHAP can explain the output of any machine learning model, we […]
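
The tree ensemble example is cut off above; as a minimal sketch of the TreeExplainer workflow, with the model and dataset below chosen as stand-ins (XGBoost on a scikit-learn toy set):

    import shap
    import xgboost
    from sklearn.datasets import load_breast_cancer

    # Stand-in setup: any supported tree ensemble works here.
    X, y = load_breast_cancer(return_X_y=True)
    model = xgboost.XGBClassifier(n_estimators=50).fit(X, y)

    # TreeExplainer computes Shapley values efficiently for tree models.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)

    # Each row attributes one prediction to individual features; the
    # attributions plus the base value sum to the model's output.
    shap.summary_plot(shap_values, X)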

Read more

Scanning your Conda environment for security vulnerabilities

You don’t want to deploy an application that has security vulnerabilities. That means your own code, but also third-party dependencies: it doesn’t matter how secure your code is if it’s exposing a TLS socket with a version of OpenSSL that has a remote code execution vulnerability. For pip-based Python applications, you’d usually run vulnerability scanners on Python dependencies like Django, and on system packages like OpenSSL. With Conda, however, the situation is a little different: Conda combines both types of […]
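
The article's own tooling is truncated above. Purely as an illustration of the first step any such scanner needs, here is a sketch that enumerates every package, Python and system alike, in the active Conda environment via conda list --json; the matching against a vulnerability database is omitted, and none of this is presented as the article's method:

    import json
    import subprocess

    # Conda environments mix Python packages and system libraries
    # (e.g. openssl), so both appear in this single listing.
    listing = subprocess.run(
        ["conda", "list", "--json"],
        capture_output=True, text=True, check=True,
    )
    packages = json.loads(listing.stdout)

    for pkg in packages:
        # A real scanner would look each name/version pair up in a
        # vulnerability database; that step is omitted here.
        print(pkg["name"], pkg["version"])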

Read more

Explainability Requires Interactivity In Python

This repository contains the code to train all custom models used in the paper Explainability Requires Interactivity, as well as to create all static explanations (heat maps and generative). For our interactive framework, see the sister repository. Precomputed generative explanations are located at static_generative_explanations. Requirements Install the conda environment via conda env create -f env.yml (depending on your system you might need to change some versions, e.g. for pytorch, cudatoolkit and pytorch-lightning). For some parts you will need the FairFace […]

Read more

The VeriNet toolkit for verification of neural networks

The VeriNet toolkit is a state-of-the-art sound and complete toolkit for verification of neural networks, based on symbolic interval propagation. VeriNet won second place overall and was the best-performing toolkit among those not using GPUs in the 2nd international verification of neural networks competition. VeriNet is developed at the Verification of Autonomous Systems (VAS) group, Imperial College London. Relevant Publications. VeriNet is developed as part of the following publications: Efficient Neural Network Verification via Adaptive Refinement and Adversarial Search DEEPSPLIT: An […]

Read more

A Python-based package for creating a parametric OpenMC plasma source from plasma parameters

This Python-based package offers a way to create a parametric OpenMC plasma source from plasma parameters. The OpenMC sources are ring sources, which reduces both the computational cost and the size of the settings.xml file. The equations implemented here are taken from this paper. Installation To install openmc-plasma-source, simply run: pip install openmc-plasma-source Basic usage from openmc_plasma_source import Plasma # create a plasma source
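
The usage example above is cut off. As a hedged sketch only: every parameter name, value, and method below is an assumption made for illustration, so check the package's README for the real signature:

    from openmc_plasma_source import Plasma  # create a plasma source

    # Hypothetical parameter names and values, for illustration only.
    my_plasma = Plasma(
        major_radius=6.2,   # assumed unit: metres
        minor_radius=2.0,
        elongation=1.7,
        triangularity=0.33,
    )
    # Assumed method turning the parametric plasma into OpenMC ring sources.
    sources = my_plasma.make_openmc_sources()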

Read more

Finding the LCM using Python from scratch

Here, I write a practice program to find the LCM, i.e. the Lowest Common Multiple of two numbers, using Python without any library. Requirements No special requirements. Contribution I strongly believe in open source contribution and I promote it. So, I invite you all to add new features to this project and to simplify the code where you can. Come and show your talent. Authors and acknowledgment Author Name: Sachin Vinayak Dabhade License GNU – Open for […]
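
The repository's own code is not shown here; a minimal from-scratch version of the idea, using Euclid's algorithm for the GCD and the identity lcm(a, b) * gcd(a, b) = a * b (the function names are mine, not the author's):

    def gcd(a, b):
        # Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b).
        while b:
            a, b = b, a % b
        return a

    def lcm(a, b):
        # Follows from the identity lcm(a, b) * gcd(a, b) == a * b.
        return a * b // gcd(a, b)

    print(lcm(12, 18))  # 36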

Read more