Machine Learning Interpretability

Sayak Paul

Kolkata, West Bengal


Benchmarking and interpretability experiments to make the most out of machine learning models.

Project status: Under Development

Artificial Intelligence

Intel Technologies
AI DevCloud / Xeon, Intel Opt ML/DL Framework, Intel Python

Code Samples [1]

Overview / Usage

Machine learning models are often harder to interpret than to build. This is a serious problem when models are deployed in high-stakes domains such as healthcare and the judicial system: strong evaluation metrics mean little if a state-of-the-art model cannot be interpreted, because without interpretability there is no way to verify that its predictions are not biased with respect to gender, race, and so on.

This project is an attempt to show that machine learning models can be interpreted, and that it is even possible to take a blackbox model and generate explanations from it. It is also a use-case of machine learning for understanding the data itself, rather than purely for predictive modeling.

Methodology / Approach

  • Generated benchmarking scores with fastai, sklearn, h2o, and interpret
  • Built decision-tree (DT) surrogates to generate explanations for an h2o model
  • Used the interpret library to produce explanations from an off-the-shelf blackbox model
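The DT-surrogate step above can be sketched with sklearn alone: train an interpretable shallow tree to mimic a blackbox model's predictions, then read explanations off the tree. This is an illustrative sketch, not the project's actual code; the project pairs the idea with an h2o model, but the same recipe works for any fitted classifier, and the synthetic dataset here is a stand-in for the Adult dataset.

```python
# Sketch of a decision-tree surrogate for a blackbox classifier.
# Assumptions: a GradientBoostingClassifier stands in for the blackbox,
# and synthetic data stands in for the real dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Blackbox": any opaque model whose behavior we want to explain.
blackbox = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Surrogate: a shallow tree trained on the blackbox's *predictions*,
# not the true labels, so it approximates the blackbox's behavior.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, blackbox.predict(X_train))

# Fidelity: how often the surrogate agrees with the blackbox on held-out
# data. High fidelity means the tree's rules are a faithful explanation.
fidelity = accuracy_score(blackbox.predict(X_test), surrogate.predict(X_test))
print(f"surrogate fidelity: {fidelity:.2f}")

# The tree's splits are the human-readable explanation of the blackbox.
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(8)]))
```

The key design choice is capping `max_depth`: a deeper tree mimics the blackbox more closely but stops being readable, so fidelity is traded against interpretability.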

Technologies Used

  • Python
  • fastai
  • h2o
  • interpret
  • sklearn

Repository

https://github.com/sayakpaul/Benchmarking-and-MLI-experiments-on-the-Adult-dataset