Commit: upload boexplain
brandonlockhart committed Feb 11, 2021
1 parent 6ddefbb commit a933bad
Showing 68 changed files with 92,574 additions and 2 deletions.
9 changes: 9 additions & 0 deletions LICENSE
@@ -0,0 +1,9 @@
MIT License

Copyright (c) 2021, Brandon Lockhart

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
99 changes: 97 additions & 2 deletions README.md
@@ -1,2 +1,97 @@
# BOExplain
Explaining Inference Queries with Bayesian Optimization
# BOExplain, Explaining Inference Queries with Bayesian Optimization

BOExplain is a library for explaining inference queries with Bayesian optimization. The corresponding paper can be found at https://arxiv.org/abs/2102.05308.

## Installation

```
pip install boexplain
```

## Documentation

The documentation is available at [https://sfu-db.github.io/BOExplain/](https://sfu-db.github.io/BOExplain/). (shortcut to [fmin](https://sfu-db.github.io/BOExplain/api_reference/boexplain.files.search.html#boexplain.files.search.fmin), [fmax](https://sfu-db.github.io/BOExplain/api_reference/boexplain.files.search.html#boexplain.files.search.fmax))
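`fmax` is the maximization counterpart of `fmin`. Conceptually, maximizing an objective is equivalent to minimizing its negation; a generic sketch of that relationship (not BOExplain's internals, which use Bayesian optimization rather than exhaustive search):

``` python
# Illustration only: maximizing f over candidates equals minimizing -f.
candidates = [1, 2, 3, 4]

def f(x):
    return -(x - 3) ** 2  # peak at x = 3

best_max = max(candidates, key=f)
best_min_of_neg = min(candidates, key=lambda x: -f(x))
print(best_max, best_min_of_neg)  # 3 3
```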

## Getting Started

Derive an explanation for why the predicted rate of having an income over $50K is higher for men than for women in the UCI ML [Adult dataset](https://archive.ics.uci.edu/ml/datasets/adult).

1. Load the data and prepare it for ML.
``` python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv(
    "adult.data",
    names=[
        "Age", "Workclass", "fnlwgt", "Education",
        "Education-Num", "Marital Status", "Occupation",
        "Relationship", "Race", "Gender", "Capital Gain",
        "Capital Loss", "Hours per week", "Country", "Income"
    ],
    na_values=" ?",
)

df["Income"].replace({" <=50K": 0, " >50K": 1}, inplace=True)
df["Gender"].replace({" Male": 0, " Female": 1}, inplace=True)
df = pd.get_dummies(df)

train, test = train_test_split(df, test_size=0.2)
test = test.drop(columns='Income')
```

2. Define the objective function that trains a random forest classifier and queries the ratio of predicted rates of having an income over $50K between men and women.
``` python
def obj(train_filtered):
    rf = RandomForestClassifier(n_estimators=13, random_state=0)
    rf.fit(train_filtered.drop(columns="Income"), train_filtered["Income"])
    test["prediction"] = rf.predict(test)
    rates = test.groupby("Gender")["prediction"].sum() / test.groupby("Gender")["prediction"].size()
    test.drop(columns="prediction", inplace=True)
    return rates[0] / rates[1]
```
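To see what the objective's groupby computes, here is a toy illustration of the predicted-rate ratio (the genders and predictions below are made up for illustration):

``` python
import pandas as pd

# Hypothetical test set: Gender 0 (men) predicted positive 2/2 times,
# Gender 1 (women) predicted positive 1/2 times.
test = pd.DataFrame({"Gender": [0, 0, 1, 1], "prediction": [1, 1, 1, 0]})

rates = test.groupby("Gender")["prediction"].sum() / test.groupby("Gender")["prediction"].size()
ratio = rates[0] / rates[1]
print(ratio)  # 2.0
```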


3. Use the function `fmin` to minimize the objective function.
``` python
from boexplain import fmin

train_filtered = fmin(
    data=train,
    f=obj,
    columns=["Age", "Education-Num"],
    runtime=30,
)
```
<!-- which returns a predicate 28 <= Age <= 59 and 6 <= Education-Num <= 16. Removing the tuples satisfying the returned predicate reduces the ratio from 3.54 to 2.7. -->
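`fmin` searches for a predicate whose removal minimizes the objective. To sanity-check the effect of a predicate such as `28 <= Age <= 59 and 6 <= Education-Num <= 16`, you can also filter manually and re-run `obj`; a minimal sketch of the filtering step with a toy DataFrame (the data here is illustrative, not the Adult dataset):

``` python
import pandas as pd

# Toy stand-in for the training data; in the real workflow use `train`.
train = pd.DataFrame({"Age": [22, 35, 60], "Education-Num": [5, 10, 12]})

# Drop tuples satisfying the predicate 28 <= Age <= 59 AND 6 <= Education-Num <= 16.
mask = train["Age"].between(28, 59) & train["Education-Num"].between(6, 16)
train_filtered = train[~mask]
print(train_filtered["Age"].tolist())  # [22, 60]
```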

## Reproduce the Experiments

To reproduce the experiments, clone the repo, install [Poetry](https://python-poetry.org/docs/#installation), and create the environment by running

```bash
poetry install
```

To set up the Poetry environment for a Jupyter notebook, run

```bash
poetry run ipython kernel install --name=boexplain
```

This creates an IPython kernel named `boexplain` for the environment.

### Adult Experiment

To reproduce the results of the Adult experiment and recreate Figure 6, follow the instructions in [adult.ipynb](https://github.com/sfu-db/BOExplain/blob/main/adult.ipynb).

### Credit Experiment

To reproduce the results of the Credit experiment and recreate Figure 8, follow the instructions in [credit.ipynb](https://github.com/sfu-db/BOExplain/blob/main/credit.ipynb).

### House Experiment

To reproduce the results of the House experiment and recreate Figure 7, follow the instructions in [house.ipynb](https://github.com/sfu-db/BOExplain/blob/main/house.ipynb).

### Scorpion Synthetic Data Experiment

To reproduce the results of the experiment with Scorpion's synthetic data and corresponding query, and recreate Figure 4, follow the instructions in [scorpion.ipynb](https://github.com/sfu-db/BOExplain/blob/main/scorpion.ipynb).
533 changes: 533 additions & 0 deletions adult.ipynb


3 changes: 3 additions & 0 deletions boexplain/__init__.py
@@ -0,0 +1,3 @@
from .files import fmin, fmax

__all__ = ["fmin", "fmax"]
3 changes: 3 additions & 0 deletions boexplain/files/__init__.py
@@ -0,0 +1,3 @@
from .search import fmin, fmax

__all__ = ["fmin", "fmax"]
17 changes: 17 additions & 0 deletions boexplain/files/cat_xform.py
@@ -0,0 +1,17 @@
import pandas as pd
import numpy as np


def individual_contribution(df, objective, cat_cols, **kwargs):
    # Returns a dictionary of dictionaries, one inner dictionary per column.
    # The inner keys are the column's categorical values; the inner values are
    # their individual contributions: for each value, remove the tuples
    # satisfying the single-clause predicate `col == val` and evaluate the
    # objective function on the remaining data.

    cat_val_to_indiv_cont = {
        col: {val: objective(df[df[col] != val], **kwargs) for val in df[col].unique()}
        for col in cat_cols
    }

    return cat_val_to_indiv_cont
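A minimal usage sketch of `individual_contribution` (the DataFrame and objective below are made up for illustration; the function is reproduced so the snippet runs standalone):

``` python
import pandas as pd

def individual_contribution(df, objective, cat_cols, **kwargs):
    # For each value in each categorical column, remove the tuples matching
    # `col == val` and re-evaluate the objective on the remaining data.
    return {
        col: {val: objective(df[df[col] != val], **kwargs) for val in df[col].unique()}
        for col in cat_cols
    }

df = pd.DataFrame({"color": ["red", "red", "blue"], "y": [1, 2, 4]})

# Toy objective: the number of remaining rows after removing each value.
contrib = individual_contribution(df, lambda d: len(d), ["color"])
print(contrib)  # {'color': {'red': 1, 'blue': 2}}
```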
