Feature interpretation is a recent topic of interest in AI/ML. For a long time, understanding and dissecting ML models took a back seat to simply building models for specific purposes. Now, however, knowledge of a model's inner workings has grown in significance, as security concerns and fine-tuning requirements demand a better understanding of what a model is actually doing. Feature interpretation refers to dissecting a model by identifying the specific features the model looks for and inferring the model's purpose from those features.
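As a deliberately simple illustration of the idea (not a reflection of the tools this repository builds on), consider a linear model: its learned weights directly reveal which input features drive its predictions. The sketch below fits a logistic-regression-style classifier with NumPy on hypothetical, synthetic data where only one feature matters, then ranks features by weight magnitude. The feature names and data are invented for the example.

```python
import numpy as np

# Hypothetical synthetic dataset: only "petal_len" determines the
# label; the other two features are pure noise.
rng = np.random.default_rng(0)
feature_names = ["sepal_len", "petal_len", "color_sat"]
X = rng.normal(size=(500, 3))
y = (X[:, 1] > 0).astype(float)  # label depends only on feature 1

# Fit a logistic-regression-style classifier by gradient descent.
w = np.zeros(3)
b = 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
    grad_w = X.T @ (p - y) / len(y)
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

# Feature interpretation in miniature: inspect which features the
# trained model actually relies on, via weight magnitudes.
ranked = sorted(zip(feature_names, np.abs(w)), key=lambda t: -t[1])
for name, mag in ranked:
    print(f"{name}: {mag:.2f}")
```

Real neural networks do not expose their features this directly, which is why dedicated dissection tooling is needed; but the goal is the same, i.e. mapping what the model has learned back to human-meaningful features.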
This repository begins to meld together some of the model dissection tools currently available into an open-source feature interpretation toolkit.