Releases: interpretml/DiCE
Rolling out DiCE for sklearn and regression models
- [Major] DiCE now supports sklearn models. Added three model-agnostic methods: random sampling, genetic algorithm, and KD-tree search
- [Major] Support for regression and multi-class problems
- [Major] Added local and global feature importance scores based on counterfactuals
- [Major] Better support for customizing counterfactuals through the `features_to_vary` and `permitted_range` parameters, for both continuous and categorical features
- [Refactor] The ML model and the DiCE explainer can use different feature transformations. The model's transformation can be provided as an input to the `dice_ml.Model` constructor; DiCE accepts inputs in the original data frame and does its transformations internally
- Enhanced tests for the library
- Deep learning libraries (tensorflow and pytorch) marked as optional dependencies
- New notebooks showing applications of DiCE in `docs/source/notebooks/`
A big thanks to @raam93, @soundarya98 and @gaugup for this release!
v0.4: Faster VAE-based method for CFs, PyTorch and TensorFlow 2.x support
Here's the latest stable version.
- DiCE now supports PyTorch, TensorFlow 1.x, and TensorFlow 2.x
- Includes a Variational AutoEncoder-based method to generate counterfactual examples, based on https://arxiv.org/abs/1912.03277. This method is much faster; try it out!
- Support for private data, when only aggregate training data statistics are available to generate counterfactuals
- Updated and faster post-hoc sparsity enhancer module for counterfactuals
- Includes bug fixes for DiCE and tests for most DiCE functionalities.
- More notebooks and detailed docs at http://interpret.ml/DiCE/
Big thanks to @raam93 for leading the updates, and to @divyat09 for adding the VAE method.
First release
Supports counterfactual explanations for TensorFlow and PyTorch classifiers.